Test Report: Docker_Linux_crio_arm64 17830

f2d99d5d3acbee63fb92e6e0c0b75bbff35f3ad4:2024-01-09:32615

Test failures (7/316)

Order  Failed test                                           Duration (s)
35     TestAddons/parallel/Ingress                           168.58
167    TestIngressAddonLegacy/serial/ValidateIngressAddons   178.37
217    TestMultiNode/serial/PingHostFrom2Pods                3.98
235    TestScheduledStopUnix                                 38.46
239    TestRunningBinaryUpgrade                              114.77
242    TestMissingContainerUpgrade                           147.01
253    TestStoppedBinaryUpgrade/Upgrade                      88.67
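
The first failure is examined in detail below. To reproduce a single failure locally, the usual route is a go test filter on the test name from a minikube source checkout; the timeout and the -minikube-start-args flag below are assumptions about the harness invocation, not the exact command line this CI job ran:

    # Hypothetical local re-run of one failed test; the start args mirror
    # the driver/runtime combination used in this report (docker + crio)
    go test ./test/integration -v -timeout 60m \
      -run 'TestAddons/parallel/Ingress' \
      -args --minikube-start-args='--driver=docker --container-runtime=crio'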
TestAddons/parallel/Ingress (168.58s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-983119 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-983119 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-983119 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [bf2440d0-25f9-4766-b965-c7bef823fa5f] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [bf2440d0-25f9-4766-b965-c7bef823fa5f] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.004507041s
addons_test.go:262: (dbg) Run:  out/minikube-linux-arm64 -p addons-983119 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-983119 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m11.29969187s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:278: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:286: (dbg) Run:  kubectl --context addons-983119 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-arm64 -p addons-983119 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:297: (dbg) Non-zero exit: nslookup hello-john.test 192.168.49.2: exit status 1 (15.052419912s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
addons_test.go:299: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.49.2" : exit status 1
addons_test.go:303: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached

stderr: 
addons_test.go:306: (dbg) Run:  out/minikube-linux-arm64 -p addons-983119 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:306: (dbg) Done: out/minikube-linux-arm64 -p addons-983119 addons disable ingress-dns --alsologtostderr -v=1: (1.258324753s)
addons_test.go:311: (dbg) Run:  out/minikube-linux-arm64 -p addons-983119 addons disable ingress --alsologtostderr -v=1
addons_test.go:311: (dbg) Done: out/minikube-linux-arm64 -p addons-983119 addons disable ingress --alsologtostderr -v=1: (7.790126621s)
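
Both failure signals in this test are timeouts rather than outright errors: status 28 is curl's "operation timed out" exit code surfacing through minikube ssh, and the nslookup output shows that nothing answered DNS at the node IP. A minimal manual triage sketch, assuming the cluster from this run were still up (the profile name and addresses are taken from the log above):

    # Does the ingress controller have running pods and a service?
    kubectl --context addons-983119 -n ingress-nginx get pods,svc

    # Repeat the in-VM request with verbose output and a short explicit
    # timeout, to distinguish a hang from a refused connection
    out/minikube-linux-arm64 -p addons-983119 ssh \
      "curl -v --max-time 10 http://127.0.0.1/ -H 'Host: nginx.example.com'"

    # Query the ingress-dns responder directly at the node IP
    nslookup -timeout=5 hello-john.test 192.168.49.2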
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-983119
helpers_test.go:235: (dbg) docker inspect addons-983119:

-- stdout --
	[
	    {
	        "Id": "5ab5012f14c1aa6f44a92723f0a432b95403f9292cb27652c9d65d74e31ed939",
	        "Created": "2024-01-09T00:01:55.648497723Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1684996,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-01-09T00:01:55.990262324Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a5be0745bf7211988da1521fe4ee64cb5f5dee2ca8e3061f061c5272199c616c",
	        "ResolvConfPath": "/var/lib/docker/containers/5ab5012f14c1aa6f44a92723f0a432b95403f9292cb27652c9d65d74e31ed939/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5ab5012f14c1aa6f44a92723f0a432b95403f9292cb27652c9d65d74e31ed939/hostname",
	        "HostsPath": "/var/lib/docker/containers/5ab5012f14c1aa6f44a92723f0a432b95403f9292cb27652c9d65d74e31ed939/hosts",
	        "LogPath": "/var/lib/docker/containers/5ab5012f14c1aa6f44a92723f0a432b95403f9292cb27652c9d65d74e31ed939/5ab5012f14c1aa6f44a92723f0a432b95403f9292cb27652c9d65d74e31ed939-json.log",
	        "Name": "/addons-983119",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-983119:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-983119",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/c9b6267f046476d559fa9d64c45b18f976569f1abe758b943d7bf00b74e57ab4-init/diff:/var/lib/docker/overlay2/a443ad727e446e5b332ea48292deac5ef22cb43b6aa42ee65e414679b2407c31/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c9b6267f046476d559fa9d64c45b18f976569f1abe758b943d7bf00b74e57ab4/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c9b6267f046476d559fa9d64c45b18f976569f1abe758b943d7bf00b74e57ab4/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c9b6267f046476d559fa9d64c45b18f976569f1abe758b943d7bf00b74e57ab4/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-983119",
	                "Source": "/var/lib/docker/volumes/addons-983119/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-983119",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-983119",
	                "name.minikube.sigs.k8s.io": "addons-983119",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6a139fa4de1b4335395a4dad5489ad3b8130f79070eb81c279e4d2b10c4e423e",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34369"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34368"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34365"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34367"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34366"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/6a139fa4de1b",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-983119": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "5ab5012f14c1",
	                        "addons-983119"
	                    ],
	                    "NetworkID": "59e4ab69f74e3626a12a60fd634fcd82ec13d843765f10eac1635d0b24f1d108",
	                    "EndpointID": "a2c987fccbc842655787229f7a4bc457fc831edcac565b7b9851abe38be2e323",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
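
Note in the inspect output that each entry under PortBindings requests an empty HostPort, so Docker assigns free ephemeral ports, and the assigned values appear under NetworkSettings.Ports. The Last Start log below reads the mapped SSH port back with exactly this kind of Go template; the same query can be issued by hand:

    # Ask dockerd which host port was bound to the container's 22/tcp
    docker container inspect \
      -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' \
      addons-983119
    # -> 34369 on this run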
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-983119 -n addons-983119
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-983119 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-983119 logs -n 25: (1.569075829s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| delete  | --all                                                                                       | minikube               | jenkins | v1.32.0 | 09 Jan 24 00:01 UTC | 09 Jan 24 00:01 UTC |
	| delete  | -p download-only-345068                                                                     | download-only-345068   | jenkins | v1.32.0 | 09 Jan 24 00:01 UTC | 09 Jan 24 00:01 UTC |
	| delete  | -p download-only-345068                                                                     | download-only-345068   | jenkins | v1.32.0 | 09 Jan 24 00:01 UTC | 09 Jan 24 00:01 UTC |
	| start   | --download-only -p                                                                          | download-docker-594175 | jenkins | v1.32.0 | 09 Jan 24 00:01 UTC |                     |
	|         | download-docker-594175                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p download-docker-594175                                                                   | download-docker-594175 | jenkins | v1.32.0 | 09 Jan 24 00:01 UTC | 09 Jan 24 00:01 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-421596   | jenkins | v1.32.0 | 09 Jan 24 00:01 UTC |                     |
	|         | binary-mirror-421596                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:35921                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-421596                                                                     | binary-mirror-421596   | jenkins | v1.32.0 | 09 Jan 24 00:01 UTC | 09 Jan 24 00:01 UTC |
	| addons  | enable dashboard -p                                                                         | addons-983119          | jenkins | v1.32.0 | 09 Jan 24 00:01 UTC |                     |
	|         | addons-983119                                                                               |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-983119          | jenkins | v1.32.0 | 09 Jan 24 00:01 UTC |                     |
	|         | addons-983119                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-983119 --wait=true                                                                | addons-983119          | jenkins | v1.32.0 | 09 Jan 24 00:01 UTC | 09 Jan 24 00:04 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --driver=docker                                                               |                        |         |         |                     |                     |
	|         |  --container-runtime=crio                                                                   |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	| ip      | addons-983119 ip                                                                            | addons-983119          | jenkins | v1.32.0 | 09 Jan 24 00:04 UTC | 09 Jan 24 00:04 UTC |
	| addons  | addons-983119 addons disable                                                                | addons-983119          | jenkins | v1.32.0 | 09 Jan 24 00:04 UTC | 09 Jan 24 00:04 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-983119 addons                                                                        | addons-983119          | jenkins | v1.32.0 | 09 Jan 24 00:04 UTC | 09 Jan 24 00:04 UTC |
	|         | disable metrics-server                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-983119          | jenkins | v1.32.0 | 09 Jan 24 00:04 UTC | 09 Jan 24 00:04 UTC |
	|         | addons-983119                                                                               |                        |         |         |                     |                     |
	| ssh     | addons-983119 ssh curl -s                                                                   | addons-983119          | jenkins | v1.32.0 | 09 Jan 24 00:05 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| addons  | addons-983119 addons                                                                        | addons-983119          | jenkins | v1.32.0 | 09 Jan 24 00:05 UTC | 09 Jan 24 00:05 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-983119 addons                                                                        | addons-983119          | jenkins | v1.32.0 | 09 Jan 24 00:05 UTC | 09 Jan 24 00:05 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-983119          | jenkins | v1.32.0 | 09 Jan 24 00:05 UTC | 09 Jan 24 00:05 UTC |
	|         | -p addons-983119                                                                            |                        |         |         |                     |                     |
	| ssh     | addons-983119 ssh cat                                                                       | addons-983119          | jenkins | v1.32.0 | 09 Jan 24 00:05 UTC | 09 Jan 24 00:05 UTC |
	|         | /opt/local-path-provisioner/pvc-0fb851d4-2568-488a-8306-8d95aae72b4e_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-983119 addons disable                                                                | addons-983119          | jenkins | v1.32.0 | 09 Jan 24 00:05 UTC | 09 Jan 24 00:06 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-983119          | jenkins | v1.32.0 | 09 Jan 24 00:06 UTC | 09 Jan 24 00:06 UTC |
	|         | addons-983119                                                                               |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-983119          | jenkins | v1.32.0 | 09 Jan 24 00:06 UTC | 09 Jan 24 00:06 UTC |
	|         | -p addons-983119                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ip      | addons-983119 ip                                                                            | addons-983119          | jenkins | v1.32.0 | 09 Jan 24 00:07 UTC | 09 Jan 24 00:07 UTC |
	| addons  | addons-983119 addons disable                                                                | addons-983119          | jenkins | v1.32.0 | 09 Jan 24 00:07 UTC | 09 Jan 24 00:07 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-983119 addons disable                                                                | addons-983119          | jenkins | v1.32.0 | 09 Jan 24 00:07 UTC | 09 Jan 24 00:07 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
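Reassembled from the wrapped Args column in the Audit table above, the single start invocation that created the cluster under test was:

    out/minikube-linux-arm64 start -p addons-983119 --wait=true \
      --memory=4000 --alsologtostderr --addons=registry \
      --addons=metrics-server --addons=volumesnapshots \
      --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner \
      --addons=inspektor-gadget --addons=storage-provisioner-rancher \
      --addons=nvidia-device-plugin --addons=yakd --driver=docker \
      --container-runtime=crio --addons=ingress --addons=ingress-dns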
	==> Last Start <==
	Log file created at: 2024/01/09 00:01:32
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.21.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0109 00:01:32.668868 1684539 out.go:296] Setting OutFile to fd 1 ...
	I0109 00:01:32.669030 1684539 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0109 00:01:32.669057 1684539 out.go:309] Setting ErrFile to fd 2...
	I0109 00:01:32.669074 1684539 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0109 00:01:32.669373 1684539 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17830-1678586/.minikube/bin
	I0109 00:01:32.669862 1684539 out.go:303] Setting JSON to false
	I0109 00:01:32.670786 1684539 start.go:128] hostinfo: {"hostname":"ip-172-31-30-239","uptime":24235,"bootTime":1704734258,"procs":160,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0109 00:01:32.670868 1684539 start.go:138] virtualization:  
	I0109 00:01:32.673389 1684539 out.go:177] * [addons-983119] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0109 00:01:32.675882 1684539 out.go:177]   - MINIKUBE_LOCATION=17830
	I0109 00:01:32.677817 1684539 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0109 00:01:32.676013 1684539 notify.go:220] Checking for updates...
	I0109 00:01:32.681447 1684539 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17830-1678586/kubeconfig
	I0109 00:01:32.683366 1684539 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17830-1678586/.minikube
	I0109 00:01:32.685226 1684539 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0109 00:01:32.686983 1684539 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0109 00:01:32.689224 1684539 driver.go:392] Setting default libvirt URI to qemu:///system
	I0109 00:01:32.713121 1684539 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I0109 00:01:32.713246 1684539 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0109 00:01:32.794297 1684539 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:38 SystemTime:2024-01-09 00:01:32.784704789 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0109 00:01:32.794399 1684539 docker.go:295] overlay module found
	I0109 00:01:32.797928 1684539 out.go:177] * Using the docker driver based on user configuration
	I0109 00:01:32.799997 1684539 start.go:298] selected driver: docker
	I0109 00:01:32.800021 1684539 start.go:902] validating driver "docker" against <nil>
	I0109 00:01:32.800034 1684539 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0109 00:01:32.800673 1684539 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0109 00:01:32.871244 1684539 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:38 SystemTime:2024-01-09 00:01:32.861841077 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0109 00:01:32.871412 1684539 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0109 00:01:32.871657 1684539 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0109 00:01:32.873487 1684539 out.go:177] * Using Docker driver with root privileges
	I0109 00:01:32.875289 1684539 cni.go:84] Creating CNI manager for ""
	I0109 00:01:32.875328 1684539 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0109 00:01:32.875345 1684539 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I0109 00:01:32.875355 1684539 start_flags.go:323] config:
	{Name:addons-983119 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-983119 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0109 00:01:32.878942 1684539 out.go:177] * Starting control plane node addons-983119 in cluster addons-983119
	I0109 00:01:32.880917 1684539 cache.go:121] Beginning downloading kic base image for docker with crio
	I0109 00:01:32.882882 1684539 out.go:177] * Pulling base image v0.0.42-1704751654-17830 ...
	I0109 00:01:32.884833 1684539 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0109 00:01:32.884891 1684539 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17830-1678586/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4
	I0109 00:01:32.884903 1684539 cache.go:56] Caching tarball of preloaded images
	I0109 00:01:32.884923 1684539 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 in local docker daemon
	I0109 00:01:32.884989 1684539 preload.go:174] Found /home/jenkins/minikube-integration/17830-1678586/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0109 00:01:32.884999 1684539 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0109 00:01:32.885347 1684539 profile.go:148] Saving config to /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/addons-983119/config.json ...
	I0109 00:01:32.885369 1684539 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/addons-983119/config.json: {Name:mka970a360b48037bdcf491fd96ef0d163acb438 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0109 00:01:32.904750 1684539 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 to local cache
	I0109 00:01:32.904948 1684539 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 in local cache directory
	I0109 00:01:32.904976 1684539 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 in local cache directory, skipping pull
	I0109 00:01:32.904986 1684539 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 exists in cache, skipping pull
	I0109 00:01:32.905008 1684539 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 as a tarball
	I0109 00:01:32.905044 1684539 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 from local cache
	I0109 00:01:48.691440 1684539 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 from cached tarball
	I0109 00:01:48.691479 1684539 cache.go:194] Successfully downloaded all kic artifacts
	I0109 00:01:48.691551 1684539 start.go:365] acquiring machines lock for addons-983119: {Name:mkfc8ae80832f1c0312be37cb3cfdf766937698b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0109 00:01:48.691666 1684539 start.go:369] acquired machines lock for "addons-983119" in 92.415µs
	I0109 00:01:48.691697 1684539 start.go:93] Provisioning new machine with config: &{Name:addons-983119 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-983119 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0109 00:01:48.691782 1684539 start.go:125] createHost starting for "" (driver="docker")
	I0109 00:01:48.694892 1684539 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0109 00:01:48.695156 1684539 start.go:159] libmachine.API.Create for "addons-983119" (driver="docker")
	I0109 00:01:48.695212 1684539 client.go:168] LocalClient.Create starting
	I0109 00:01:48.695318 1684539 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/17830-1678586/.minikube/certs/ca.pem
	I0109 00:01:48.963519 1684539 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/17830-1678586/.minikube/certs/cert.pem
	I0109 00:01:49.385034 1684539 cli_runner.go:164] Run: docker network inspect addons-983119 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0109 00:01:49.401629 1684539 cli_runner.go:211] docker network inspect addons-983119 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0109 00:01:49.401722 1684539 network_create.go:281] running [docker network inspect addons-983119] to gather additional debugging logs...
	I0109 00:01:49.401746 1684539 cli_runner.go:164] Run: docker network inspect addons-983119
	W0109 00:01:49.418269 1684539 cli_runner.go:211] docker network inspect addons-983119 returned with exit code 1
	I0109 00:01:49.418302 1684539 network_create.go:284] error running [docker network inspect addons-983119]: docker network inspect addons-983119: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-983119 not found
	I0109 00:01:49.418315 1684539 network_create.go:286] output of [docker network inspect addons-983119]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-983119 not found
	
	** /stderr **
	I0109 00:01:49.418412 1684539 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0109 00:01:49.435896 1684539 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4002955720}
	I0109 00:01:49.435936 1684539 network_create.go:124] attempt to create docker network addons-983119 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0109 00:01:49.435999 1684539 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-983119 addons-983119
	I0109 00:01:49.504376 1684539 network_create.go:108] docker network addons-983119 192.168.49.0/24 created
	I0109 00:01:49.504416 1684539 kic.go:121] calculated static IP "192.168.49.2" for the "addons-983119" container
	I0109 00:01:49.504504 1684539 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0109 00:01:49.521230 1684539 cli_runner.go:164] Run: docker volume create addons-983119 --label name.minikube.sigs.k8s.io=addons-983119 --label created_by.minikube.sigs.k8s.io=true
	I0109 00:01:49.543954 1684539 oci.go:103] Successfully created a docker volume addons-983119
	I0109 00:01:49.544042 1684539 cli_runner.go:164] Run: docker run --rm --name addons-983119-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-983119 --entrypoint /usr/bin/test -v addons-983119:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 -d /var/lib
	I0109 00:01:51.417166 1684539 cli_runner.go:217] Completed: docker run --rm --name addons-983119-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-983119 --entrypoint /usr/bin/test -v addons-983119:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 -d /var/lib: (1.873074926s)
	I0109 00:01:51.417197 1684539 oci.go:107] Successfully prepared a docker volume addons-983119
	I0109 00:01:51.417230 1684539 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0109 00:01:51.417253 1684539 kic.go:194] Starting extracting preloaded images to volume ...
	I0109 00:01:51.417337 1684539 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17830-1678586/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-983119:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 -I lz4 -xf /preloaded.tar -C /extractDir
	I0109 00:01:55.561145 1684539 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17830-1678586/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-983119:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 -I lz4 -xf /preloaded.tar -C /extractDir: (4.14375622s)
	I0109 00:01:55.561178 1684539 kic.go:203] duration metric: took 4.143922 seconds to extract preloaded images to volume
	W0109 00:01:55.561330 1684539 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0109 00:01:55.561449 1684539 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0109 00:01:55.630777 1684539 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-983119 --name addons-983119 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-983119 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-983119 --network addons-983119 --ip 192.168.49.2 --volume addons-983119:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617
	I0109 00:01:55.999757 1684539 cli_runner.go:164] Run: docker container inspect addons-983119 --format={{.State.Running}}
	I0109 00:01:56.020207 1684539 cli_runner.go:164] Run: docker container inspect addons-983119 --format={{.State.Status}}
	I0109 00:01:56.045987 1684539 cli_runner.go:164] Run: docker exec addons-983119 stat /var/lib/dpkg/alternatives/iptables
	I0109 00:01:56.114716 1684539 oci.go:144] the created container "addons-983119" has a running status.
	I0109 00:01:56.114749 1684539 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17830-1678586/.minikube/machines/addons-983119/id_rsa...
	I0109 00:01:56.317756 1684539 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17830-1678586/.minikube/machines/addons-983119/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0109 00:01:56.345245 1684539 cli_runner.go:164] Run: docker container inspect addons-983119 --format={{.State.Status}}
	I0109 00:01:56.371596 1684539 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0109 00:01:56.371616 1684539 kic_runner.go:114] Args: [docker exec --privileged addons-983119 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0109 00:01:56.446955 1684539 cli_runner.go:164] Run: docker container inspect addons-983119 --format={{.State.Status}}
	I0109 00:01:56.479549 1684539 machine.go:88] provisioning docker machine ...
	I0109 00:01:56.479586 1684539 ubuntu.go:169] provisioning hostname "addons-983119"
	I0109 00:01:56.479653 1684539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-983119
	I0109 00:01:56.500946 1684539 main.go:141] libmachine: Using SSH client type: native
	I0109 00:01:56.501657 1684539 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bfa60] 0x3c21d0 <nil>  [] 0s} 127.0.0.1 34369 <nil> <nil>}
	I0109 00:01:56.501676 1684539 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-983119 && echo "addons-983119" | sudo tee /etc/hostname
	I0109 00:01:56.502778 1684539 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0109 00:01:59.669420 1684539 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-983119
	
	I0109 00:01:59.669541 1684539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-983119
	I0109 00:01:59.689451 1684539 main.go:141] libmachine: Using SSH client type: native
	I0109 00:01:59.689855 1684539 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bfa60] 0x3c21d0 <nil>  [] 0s} 127.0.0.1 34369 <nil> <nil>}
	I0109 00:01:59.689878 1684539 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-983119' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-983119/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-983119' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0109 00:01:59.839790 1684539 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0109 00:01:59.839818 1684539 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17830-1678586/.minikube CaCertPath:/home/jenkins/minikube-integration/17830-1678586/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17830-1678586/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17830-1678586/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17830-1678586/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17830-1678586/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17830-1678586/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17830-1678586/.minikube}
	I0109 00:01:59.839837 1684539 ubuntu.go:177] setting up certificates
	I0109 00:01:59.839846 1684539 provision.go:83] configureAuth start
	I0109 00:01:59.839916 1684539 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-983119
	I0109 00:01:59.858752 1684539 provision.go:138] copyHostCerts
	I0109 00:01:59.858837 1684539 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17830-1678586/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17830-1678586/.minikube/key.pem (1679 bytes)
	I0109 00:01:59.858979 1684539 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17830-1678586/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17830-1678586/.minikube/ca.pem (1082 bytes)
	I0109 00:01:59.859053 1684539 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17830-1678586/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17830-1678586/.minikube/cert.pem (1123 bytes)
	I0109 00:01:59.859113 1684539 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17830-1678586/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17830-1678586/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17830-1678586/.minikube/certs/ca-key.pem org=jenkins.addons-983119 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube addons-983119]
	I0109 00:02:00.153133 1684539 provision.go:172] copyRemoteCerts
	I0109 00:02:00.153236 1684539 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0109 00:02:00.153284 1684539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-983119
	I0109 00:02:00.175646 1684539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34369 SSHKeyPath:/home/jenkins/minikube-integration/17830-1678586/.minikube/machines/addons-983119/id_rsa Username:docker}
	I0109 00:02:00.286281 1684539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-1678586/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0109 00:02:00.318624 1684539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-1678586/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0109 00:02:00.351863 1684539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-1678586/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0109 00:02:00.382711 1684539 provision.go:86] duration metric: configureAuth took 542.848236ms
	I0109 00:02:00.382738 1684539 ubuntu.go:193] setting minikube options for container-runtime
	I0109 00:02:00.382934 1684539 config.go:182] Loaded profile config "addons-983119": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0109 00:02:00.383056 1684539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-983119
	I0109 00:02:00.402004 1684539 main.go:141] libmachine: Using SSH client type: native
	I0109 00:02:00.402429 1684539 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bfa60] 0x3c21d0 <nil>  [] 0s} 127.0.0.1 34369 <nil> <nil>}
	I0109 00:02:00.402496 1684539 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0109 00:02:00.670703 1684539 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0109 00:02:00.670730 1684539 machine.go:91] provisioned docker machine in 4.191157314s
	I0109 00:02:00.670747 1684539 client.go:171] LocalClient.Create took 11.975525881s
	I0109 00:02:00.670765 1684539 start.go:167] duration metric: libmachine.API.Create for "addons-983119" took 11.975611084s
	I0109 00:02:00.670777 1684539 start.go:300] post-start starting for "addons-983119" (driver="docker")
	I0109 00:02:00.670788 1684539 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0109 00:02:00.670876 1684539 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0109 00:02:00.670927 1684539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-983119
	I0109 00:02:00.692699 1684539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34369 SSHKeyPath:/home/jenkins/minikube-integration/17830-1678586/.minikube/machines/addons-983119/id_rsa Username:docker}
	I0109 00:02:00.798212 1684539 ssh_runner.go:195] Run: cat /etc/os-release
	I0109 00:02:00.802598 1684539 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0109 00:02:00.802638 1684539 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0109 00:02:00.802649 1684539 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0109 00:02:00.802656 1684539 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0109 00:02:00.802667 1684539 filesync.go:126] Scanning /home/jenkins/minikube-integration/17830-1678586/.minikube/addons for local assets ...
	I0109 00:02:00.802736 1684539 filesync.go:126] Scanning /home/jenkins/minikube-integration/17830-1678586/.minikube/files for local assets ...
	I0109 00:02:00.802766 1684539 start.go:303] post-start completed in 131.983889ms
	I0109 00:02:00.803086 1684539 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-983119
	I0109 00:02:00.821518 1684539 profile.go:148] Saving config to /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/addons-983119/config.json ...
	I0109 00:02:00.821809 1684539 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0109 00:02:00.821865 1684539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-983119
	I0109 00:02:00.840115 1684539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34369 SSHKeyPath:/home/jenkins/minikube-integration/17830-1678586/.minikube/machines/addons-983119/id_rsa Username:docker}
	I0109 00:02:00.940780 1684539 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0109 00:02:00.946806 1684539 start.go:128] duration metric: createHost completed in 12.255010013s
	I0109 00:02:00.946833 1684539 start.go:83] releasing machines lock for "addons-983119", held for 12.255153513s
	I0109 00:02:00.946917 1684539 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-983119
	I0109 00:02:00.970293 1684539 ssh_runner.go:195] Run: cat /version.json
	I0109 00:02:00.970348 1684539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-983119
	I0109 00:02:00.970610 1684539 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0109 00:02:00.970678 1684539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-983119
	I0109 00:02:00.995745 1684539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34369 SSHKeyPath:/home/jenkins/minikube-integration/17830-1678586/.minikube/machines/addons-983119/id_rsa Username:docker}
	I0109 00:02:00.999889 1684539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34369 SSHKeyPath:/home/jenkins/minikube-integration/17830-1678586/.minikube/machines/addons-983119/id_rsa Username:docker}
	I0109 00:02:01.095706 1684539 ssh_runner.go:195] Run: systemctl --version
	I0109 00:02:01.234087 1684539 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0109 00:02:01.381991 1684539 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0109 00:02:01.388158 1684539 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0109 00:02:01.413124 1684539 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0109 00:02:01.413204 1684539 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0109 00:02:01.455748 1684539 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
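The two find invocations above sideline any pre-existing loopback, bridge, and podman CNI configurations by renaming them with a .mk_disabled suffix, so only minikube's own CNI (kindnet, recommended further down) stays active. What was disabled can be listed afterwards; a sketch run inside the node:

	ls /etc/cni/net.d/*.mk_disabled
	# per the log line above: 87-podman-bridge.conflist.mk_disabled, 100-crio-bridge.conf.mk_disabled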
	I0109 00:02:01.455773 1684539 start.go:475] detecting cgroup driver to use...
	I0109 00:02:01.455805 1684539 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0109 00:02:01.455854 1684539 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0109 00:02:01.473443 1684539 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0109 00:02:01.487031 1684539 docker.go:203] disabling cri-docker service (if available) ...
	I0109 00:02:01.487093 1684539 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0109 00:02:01.503184 1684539 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0109 00:02:01.519887 1684539 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0109 00:02:01.617384 1684539 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0109 00:02:01.714780 1684539 docker.go:219] disabling docker service ...
	I0109 00:02:01.714851 1684539 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0109 00:02:01.736754 1684539 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0109 00:02:01.751597 1684539 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0109 00:02:01.851672 1684539 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0109 00:02:01.962222 1684539 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0109 00:02:01.976927 1684539 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0109 00:02:01.997156 1684539 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0109 00:02:01.997277 1684539 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0109 00:02:02.009603 1684539 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0109 00:02:02.009680 1684539 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0109 00:02:02.021826 1684539 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0109 00:02:02.033932 1684539 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
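The three sed edits above pin the pause image and switch CRI-O to the cgroupfs cgroup manager with conmon placed in the pod cgroup. The expected effect on the drop-in file, checked by hand (a sketch, not captured output):

	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
	# pause_image = "registry.k8s.io/pause:3.9"
	# cgroup_manager = "cgroupfs"
	# conmon_cgroup = "pod"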
	I0109 00:02:02.045656 1684539 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0109 00:02:02.056746 1684539 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0109 00:02:02.067040 1684539 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0109 00:02:02.077464 1684539 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0109 00:02:02.172848 1684539 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0109 00:02:02.297143 1684539 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0109 00:02:02.297225 1684539 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0109 00:02:02.301962 1684539 start.go:543] Will wait 60s for crictl version
	I0109 00:02:02.302071 1684539 ssh_runner.go:195] Run: which crictl
	I0109 00:02:02.306599 1684539 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0109 00:02:02.351855 1684539 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0109 00:02:02.351983 1684539 ssh_runner.go:195] Run: crio --version
	I0109 00:02:02.397215 1684539 ssh_runner.go:195] Run: crio --version
	I0109 00:02:02.444189 1684539 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.6 ...
	I0109 00:02:02.446243 1684539 cli_runner.go:164] Run: docker network inspect addons-983119 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0109 00:02:02.463759 1684539 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0109 00:02:02.468406 1684539 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0109 00:02:02.481825 1684539 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0109 00:02:02.481894 1684539 ssh_runner.go:195] Run: sudo crictl images --output json
	I0109 00:02:02.547945 1684539 crio.go:496] all images are preloaded for cri-o runtime.
	I0109 00:02:02.547972 1684539 crio.go:415] Images already preloaded, skipping extraction
	I0109 00:02:02.548038 1684539 ssh_runner.go:195] Run: sudo crictl images --output json
	I0109 00:02:02.596252 1684539 crio.go:496] all images are preloaded for cri-o runtime.
	I0109 00:02:02.596278 1684539 cache_images.go:84] Images are preloaded, skipping loading
	I0109 00:02:02.596355 1684539 ssh_runner.go:195] Run: crio config
	I0109 00:02:02.670241 1684539 cni.go:84] Creating CNI manager for ""
	I0109 00:02:02.672513 1684539 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0109 00:02:02.672553 1684539 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0109 00:02:02.672580 1684539 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-983119 NodeName:addons-983119 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0109 00:02:02.672737 1684539 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-983119"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
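	A note on the generated config above: it is staged as /var/tmp/minikube/kubeadm.yaml.new and promoted to kubeadm.yaml at 00:02:06 below. kubeadm gained a `config validate` subcommand in v1.26, so with the v1.28.4 binaries used here the staged file could be sanity-checked offline; a sketch:

	/var/lib/minikube/binaries/v1.28.4/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml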
	
	I0109 00:02:02.672805 1684539 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=addons-983119 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:addons-983119 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0109 00:02:02.672878 1684539 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0109 00:02:02.684246 1684539 binaries.go:44] Found k8s binaries, skipping transfer
	I0109 00:02:02.684331 1684539 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0109 00:02:02.694915 1684539 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (423 bytes)
	I0109 00:02:02.716171 1684539 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0109 00:02:02.737487 1684539 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2094 bytes)
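With the unit file, the 10-kubeadm.conf drop-in, and kubeadm.yaml.new now written (the three scp lines above), the merged kubelet definition that systemd will actually run can be inspected in one step; a sketch run on the node:

	sudo systemctl cat kubelet
	# prints /lib/systemd/system/kubelet.service followed by
	# /etc/systemd/system/kubelet.service.d/10-kubeadm.conf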
	I0109 00:02:02.758819 1684539 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0109 00:02:02.763186 1684539 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0109 00:02:02.776553 1684539 certs.go:56] Setting up /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/addons-983119 for IP: 192.168.49.2
	I0109 00:02:02.776638 1684539 certs.go:190] acquiring lock for shared ca certs: {Name:mkd1a8a8c523b20f31a5839efb0f14edb2634692 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0109 00:02:02.777366 1684539 certs.go:204] generating minikubeCA CA: /home/jenkins/minikube-integration/17830-1678586/.minikube/ca.key
	I0109 00:02:03.375052 1684539 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17830-1678586/.minikube/ca.crt ...
	I0109 00:02:03.375082 1684539 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17830-1678586/.minikube/ca.crt: {Name:mkf7146224342b3f6bc4426c2fbbb09bf69fbe2e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0109 00:02:03.375803 1684539 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17830-1678586/.minikube/ca.key ...
	I0109 00:02:03.375819 1684539 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17830-1678586/.minikube/ca.key: {Name:mk9f0971bcd70478575912cef8f73b747e941128 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0109 00:02:03.375916 1684539 certs.go:204] generating proxyClientCA CA: /home/jenkins/minikube-integration/17830-1678586/.minikube/proxy-client-ca.key
	I0109 00:02:03.847951 1684539 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17830-1678586/.minikube/proxy-client-ca.crt ...
	I0109 00:02:03.847985 1684539 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17830-1678586/.minikube/proxy-client-ca.crt: {Name:mkf36c3f144653f1a97dcc8f3ce82fcfaa9cb327 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0109 00:02:03.848177 1684539 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17830-1678586/.minikube/proxy-client-ca.key ...
	I0109 00:02:03.848189 1684539 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17830-1678586/.minikube/proxy-client-ca.key: {Name:mka8852b63b693b73e9a4836a046beb8321a7387 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0109 00:02:03.848806 1684539 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/addons-983119/client.key
	I0109 00:02:03.848824 1684539 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/addons-983119/client.crt with IP's: []
	I0109 00:02:05.065520 1684539 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/addons-983119/client.crt ...
	I0109 00:02:05.065551 1684539 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/addons-983119/client.crt: {Name:mkffad299d2b4be5ee1f4b6d0b84f15dce7401c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0109 00:02:05.065742 1684539 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/addons-983119/client.key ...
	I0109 00:02:05.065754 1684539 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/addons-983119/client.key: {Name:mk2190133b59d2a1fb208b6c6593ad3504fab425 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0109 00:02:05.065837 1684539 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/addons-983119/apiserver.key.dd3b5fb2
	I0109 00:02:05.065855 1684539 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/addons-983119/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0109 00:02:05.589637 1684539 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/addons-983119/apiserver.crt.dd3b5fb2 ...
	I0109 00:02:05.589671 1684539 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/addons-983119/apiserver.crt.dd3b5fb2: {Name:mk97a1d073de080011ba764cebfaa4632c7c769a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0109 00:02:05.589851 1684539 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/addons-983119/apiserver.key.dd3b5fb2 ...
	I0109 00:02:05.589865 1684539 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/addons-983119/apiserver.key.dd3b5fb2: {Name:mk0029759f7b2ce1350e7b869288143abafdd88e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0109 00:02:05.589947 1684539 certs.go:337] copying /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/addons-983119/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/addons-983119/apiserver.crt
	I0109 00:02:05.590026 1684539 certs.go:341] copying /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/addons-983119/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/addons-983119/apiserver.key
	I0109 00:02:05.590080 1684539 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/addons-983119/proxy-client.key
	I0109 00:02:05.590101 1684539 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/addons-983119/proxy-client.crt with IP's: []
	I0109 00:02:05.943225 1684539 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/addons-983119/proxy-client.crt ...
	I0109 00:02:05.943256 1684539 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/addons-983119/proxy-client.crt: {Name:mke3da0843e72201939032f69e01bfa7262690f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0109 00:02:05.943440 1684539 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/addons-983119/proxy-client.key ...
	I0109 00:02:05.943458 1684539 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/addons-983119/proxy-client.key: {Name:mkd9c3e10074c6134052e1da43e5b7e91bd8ca8e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0109 00:02:05.943642 1684539 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-1678586/.minikube/certs/home/jenkins/minikube-integration/17830-1678586/.minikube/certs/ca-key.pem (1679 bytes)
	I0109 00:02:05.943692 1684539 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-1678586/.minikube/certs/home/jenkins/minikube-integration/17830-1678586/.minikube/certs/ca.pem (1082 bytes)
	I0109 00:02:05.943726 1684539 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-1678586/.minikube/certs/home/jenkins/minikube-integration/17830-1678586/.minikube/certs/cert.pem (1123 bytes)
	I0109 00:02:05.943762 1684539 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-1678586/.minikube/certs/home/jenkins/minikube-integration/17830-1678586/.minikube/certs/key.pem (1679 bytes)
	I0109 00:02:05.944355 1684539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/addons-983119/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0109 00:02:05.972413 1684539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/addons-983119/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0109 00:02:06.001079 1684539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/addons-983119/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0109 00:02:06.032094 1684539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/addons-983119/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0109 00:02:06.062647 1684539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-1678586/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0109 00:02:06.092261 1684539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-1678586/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0109 00:02:06.121421 1684539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-1678586/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0109 00:02:06.151580 1684539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-1678586/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0109 00:02:06.179317 1684539 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-1678586/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0109 00:02:06.207455 1684539 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0109 00:02:06.228702 1684539 ssh_runner.go:195] Run: openssl version
	I0109 00:02:06.235625 1684539 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0109 00:02:06.247332 1684539 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0109 00:02:06.251939 1684539 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan  9 00:02 /usr/share/ca-certificates/minikubeCA.pem
	I0109 00:02:06.252006 1684539 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0109 00:02:06.260547 1684539 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
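The b5213941.0 link name above follows OpenSSL's subject-hash convention: the library locates a CA in /etc/ssl/certs by the hash that `openssl x509 -hash` prints, with a .0 suffix to break collisions. The two commands above amount to the following (a sketch using the same paths):

	H=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${H}.0"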
	I0109 00:02:06.271850 1684539 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0109 00:02:06.276212 1684539 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0109 00:02:06.276260 1684539 kubeadm.go:404] StartCluster: {Name:addons-983119 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-983119 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0109 00:02:06.276346 1684539 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0109 00:02:06.276401 1684539 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0109 00:02:06.319941 1684539 cri.go:89] found id: ""
	I0109 00:02:06.320054 1684539 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0109 00:02:06.330753 1684539 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0109 00:02:06.341220 1684539 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0109 00:02:06.341306 1684539 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0109 00:02:06.351947 1684539 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0109 00:02:06.351992 1684539 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0109 00:02:06.406887 1684539 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0109 00:02:06.407033 1684539 kubeadm.go:322] [preflight] Running pre-flight checks
	I0109 00:02:06.452788 1684539 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0109 00:02:06.452899 1684539 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1051-aws
	I0109 00:02:06.452960 1684539 kubeadm.go:322] OS: Linux
	I0109 00:02:06.453029 1684539 kubeadm.go:322] CGROUPS_CPU: enabled
	I0109 00:02:06.453105 1684539 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0109 00:02:06.453175 1684539 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0109 00:02:06.453245 1684539 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0109 00:02:06.453321 1684539 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0109 00:02:06.453390 1684539 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0109 00:02:06.453464 1684539 kubeadm.go:322] CGROUPS_PIDS: enabled
	I0109 00:02:06.453540 1684539 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I0109 00:02:06.453619 1684539 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I0109 00:02:06.533074 1684539 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0109 00:02:06.533203 1684539 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0109 00:02:06.533359 1684539 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0109 00:02:06.782191 1684539 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0109 00:02:06.787034 1684539 out.go:204]   - Generating certificates and keys ...
	I0109 00:02:06.787246 1684539 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0109 00:02:06.787354 1684539 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0109 00:02:07.197928 1684539 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0109 00:02:07.752393 1684539 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0109 00:02:08.032152 1684539 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0109 00:02:08.828170 1684539 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0109 00:02:09.135340 1684539 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0109 00:02:09.135476 1684539 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-983119 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0109 00:02:09.635374 1684539 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0109 00:02:09.635685 1684539 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-983119 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0109 00:02:10.738869 1684539 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0109 00:02:11.158754 1684539 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0109 00:02:11.443348 1684539 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0109 00:02:11.443651 1684539 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0109 00:02:11.829268 1684539 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0109 00:02:12.279556 1684539 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0109 00:02:13.110982 1684539 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0109 00:02:13.526381 1684539 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0109 00:02:13.527151 1684539 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0109 00:02:13.530312 1684539 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0109 00:02:13.534026 1684539 out.go:204]   - Booting up control plane ...
	I0109 00:02:13.534121 1684539 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0109 00:02:13.534193 1684539 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0109 00:02:13.534255 1684539 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0109 00:02:13.545256 1684539 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0109 00:02:13.546075 1684539 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0109 00:02:13.546293 1684539 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0109 00:02:13.648291 1684539 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0109 00:02:20.651270 1684539 kubeadm.go:322] [apiclient] All control plane components are healthy after 7.003056 seconds
	I0109 00:02:20.651390 1684539 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0109 00:02:20.663976 1684539 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0109 00:02:21.189786 1684539 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0109 00:02:21.189991 1684539 kubeadm.go:322] [mark-control-plane] Marking the node addons-983119 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0109 00:02:21.701261 1684539 kubeadm.go:322] [bootstrap-token] Using token: 6ghwvt.xqzmwas9z69yovvl
	I0109 00:02:21.703119 1684539 out.go:204]   - Configuring RBAC rules ...
	I0109 00:02:21.703238 1684539 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0109 00:02:21.708059 1684539 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0109 00:02:21.715801 1684539 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0109 00:02:21.719755 1684539 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0109 00:02:21.724548 1684539 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0109 00:02:21.729071 1684539 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0109 00:02:21.742378 1684539 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0109 00:02:22.000675 1684539 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0109 00:02:22.118750 1684539 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0109 00:02:22.122891 1684539 kubeadm.go:322] 
	I0109 00:02:22.122961 1684539 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0109 00:02:22.122968 1684539 kubeadm.go:322] 
	I0109 00:02:22.123043 1684539 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0109 00:02:22.123048 1684539 kubeadm.go:322] 
	I0109 00:02:22.123072 1684539 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0109 00:02:22.123127 1684539 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0109 00:02:22.123175 1684539 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0109 00:02:22.123179 1684539 kubeadm.go:322] 
	I0109 00:02:22.123230 1684539 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0109 00:02:22.123239 1684539 kubeadm.go:322] 
	I0109 00:02:22.123303 1684539 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0109 00:02:22.123309 1684539 kubeadm.go:322] 
	I0109 00:02:22.123358 1684539 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0109 00:02:22.123428 1684539 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0109 00:02:22.123492 1684539 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0109 00:02:22.123497 1684539 kubeadm.go:322] 
	I0109 00:02:22.123575 1684539 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0109 00:02:22.123655 1684539 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0109 00:02:22.123661 1684539 kubeadm.go:322] 
	I0109 00:02:22.123739 1684539 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 6ghwvt.xqzmwas9z69yovvl \
	I0109 00:02:22.123836 1684539 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:2f5d2b90e0873ecdcc03ee1f37a9ff73145aa86994d578f7f9f8008617cee046 \
	I0109 00:02:22.123855 1684539 kubeadm.go:322] 	--control-plane 
	I0109 00:02:22.123860 1684539 kubeadm.go:322] 
	I0109 00:02:22.123939 1684539 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0109 00:02:22.123944 1684539 kubeadm.go:322] 
	I0109 00:02:22.124021 1684539 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 6ghwvt.xqzmwas9z69yovvl \
	I0109 00:02:22.124116 1684539 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:2f5d2b90e0873ecdcc03ee1f37a9ff73145aa86994d578f7f9f8008617cee046 
	I0109 00:02:22.124621 1684539 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1051-aws\n", err: exit status 1
	I0109 00:02:22.124782 1684539 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0109 00:02:22.124834 1684539 cni.go:84] Creating CNI manager for ""
	I0109 00:02:22.124854 1684539 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0109 00:02:22.128685 1684539 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0109 00:02:22.130712 1684539 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0109 00:02:22.137262 1684539 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0109 00:02:22.137281 1684539 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0109 00:02:22.165272 1684539 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0109 00:02:23.070055 1684539 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0109 00:02:23.070203 1684539 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:02:23.070286 1684539 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a minikube.k8s.io/name=addons-983119 minikube.k8s.io/updated_at=2024_01_09T00_02_23_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:02:23.213922 1684539 ops.go:34] apiserver oom_adj: -16
	I0109 00:02:23.214033 1684539 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:02:23.714086 1684539 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:02:24.214375 1684539 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:02:24.714136 1684539 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:02:25.214233 1684539 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:02:25.715043 1684539 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:02:26.214190 1684539 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:02:26.714894 1684539 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:02:27.214495 1684539 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:02:27.714653 1684539 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:02:28.214838 1684539 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:02:28.714636 1684539 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:02:29.214905 1684539 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:02:29.714134 1684539 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:02:30.214129 1684539 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:02:30.714178 1684539 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:02:31.214171 1684539 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:02:31.714151 1684539 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:02:32.214144 1684539 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:02:32.714538 1684539 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:02:33.214561 1684539 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:02:33.714173 1684539 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:02:34.214178 1684539 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:02:34.714625 1684539 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:02:35.214929 1684539 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:02:35.397585 1684539 kubeadm.go:1088] duration metric: took 12.327427381s to wait for elevateKubeSystemPrivileges.
	I0109 00:02:35.397611 1684539 kubeadm.go:406] StartCluster complete in 29.121355711s
	I0109 00:02:35.397627 1684539 settings.go:142] acquiring lock: {Name:mk0f4be07809726b91ed42aaaa2120516a2004e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0109 00:02:35.397734 1684539 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17830-1678586/kubeconfig
	I0109 00:02:35.398132 1684539 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17830-1678586/kubeconfig: {Name:mkd692fadb6f1e94cc8cf2ddbb66429fa6c0e8fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0109 00:02:35.400672 1684539 config.go:182] Loaded profile config "addons-983119": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0109 00:02:35.400722 1684539 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0109 00:02:35.400946 1684539 addons.go:505] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true yakd:true]
	I0109 00:02:35.401100 1684539 addons.go:69] Setting yakd=true in profile "addons-983119"
	I0109 00:02:35.401122 1684539 addons.go:237] Setting addon yakd=true in "addons-983119"
	I0109 00:02:35.401188 1684539 host.go:66] Checking if "addons-983119" exists ...
	I0109 00:02:35.401639 1684539 cli_runner.go:164] Run: docker container inspect addons-983119 --format={{.State.Status}}
	I0109 00:02:35.402112 1684539 addons.go:69] Setting cloud-spanner=true in profile "addons-983119"
	I0109 00:02:35.402130 1684539 addons.go:237] Setting addon cloud-spanner=true in "addons-983119"
	I0109 00:02:35.402168 1684539 host.go:66] Checking if "addons-983119" exists ...
	I0109 00:02:35.402587 1684539 cli_runner.go:164] Run: docker container inspect addons-983119 --format={{.State.Status}}
	I0109 00:02:35.402937 1684539 addons.go:69] Setting metrics-server=true in profile "addons-983119"
	I0109 00:02:35.402954 1684539 addons.go:237] Setting addon metrics-server=true in "addons-983119"
	I0109 00:02:35.402985 1684539 host.go:66] Checking if "addons-983119" exists ...
	I0109 00:02:35.403368 1684539 cli_runner.go:164] Run: docker container inspect addons-983119 --format={{.State.Status}}
	I0109 00:02:35.403820 1684539 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-983119"
	I0109 00:02:35.403859 1684539 addons.go:237] Setting addon nvidia-device-plugin=true in "addons-983119"
	I0109 00:02:35.403947 1684539 host.go:66] Checking if "addons-983119" exists ...
	I0109 00:02:35.404404 1684539 cli_runner.go:164] Run: docker container inspect addons-983119 --format={{.State.Status}}
	I0109 00:02:35.411983 1684539 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-983119"
	I0109 00:02:35.412169 1684539 addons.go:237] Setting addon csi-hostpath-driver=true in "addons-983119"
	I0109 00:02:35.412242 1684539 host.go:66] Checking if "addons-983119" exists ...
	I0109 00:02:35.412873 1684539 cli_runner.go:164] Run: docker container inspect addons-983119 --format={{.State.Status}}
	I0109 00:02:35.420663 1684539 addons.go:69] Setting default-storageclass=true in profile "addons-983119"
	I0109 00:02:35.426652 1684539 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-983119"
	I0109 00:02:35.426975 1684539 cli_runner.go:164] Run: docker container inspect addons-983119 --format={{.State.Status}}
	I0109 00:02:35.420679 1684539 addons.go:69] Setting gcp-auth=true in profile "addons-983119"
	I0109 00:02:35.433184 1684539 mustload.go:65] Loading cluster: addons-983119
	I0109 00:02:35.434237 1684539 config.go:182] Loaded profile config "addons-983119": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0109 00:02:35.434714 1684539 cli_runner.go:164] Run: docker container inspect addons-983119 --format={{.State.Status}}
	I0109 00:02:35.420685 1684539 addons.go:69] Setting ingress=true in profile "addons-983119"
	I0109 00:02:35.462779 1684539 addons.go:237] Setting addon ingress=true in "addons-983119"
	I0109 00:02:35.462874 1684539 host.go:66] Checking if "addons-983119" exists ...
	I0109 00:02:35.463380 1684539 cli_runner.go:164] Run: docker container inspect addons-983119 --format={{.State.Status}}
	I0109 00:02:35.420689 1684539 addons.go:69] Setting ingress-dns=true in profile "addons-983119"
	I0109 00:02:35.476362 1684539 addons.go:237] Setting addon ingress-dns=true in "addons-983119"
	I0109 00:02:35.476452 1684539 host.go:66] Checking if "addons-983119" exists ...
	I0109 00:02:35.476957 1684539 cli_runner.go:164] Run: docker container inspect addons-983119 --format={{.State.Status}}
	I0109 00:02:35.420693 1684539 addons.go:69] Setting inspektor-gadget=true in profile "addons-983119"
	I0109 00:02:35.486200 1684539 addons.go:237] Setting addon inspektor-gadget=true in "addons-983119"
	I0109 00:02:35.486264 1684539 host.go:66] Checking if "addons-983119" exists ...
	I0109 00:02:35.486754 1684539 cli_runner.go:164] Run: docker container inspect addons-983119 --format={{.State.Status}}
	I0109 00:02:35.425055 1684539 addons.go:69] Setting registry=true in profile "addons-983119"
	I0109 00:02:35.425078 1684539 addons.go:69] Setting storage-provisioner=true in profile "addons-983119"
	I0109 00:02:35.425086 1684539 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-983119"
	I0109 00:02:35.425090 1684539 addons.go:69] Setting volumesnapshots=true in profile "addons-983119"
	I0109 00:02:35.510487 1684539 addons.go:237] Setting addon volumesnapshots=true in "addons-983119"
	I0109 00:02:35.510568 1684539 host.go:66] Checking if "addons-983119" exists ...
	I0109 00:02:35.515573 1684539 cli_runner.go:164] Run: docker container inspect addons-983119 --format={{.State.Status}}
	I0109 00:02:35.564930 1684539 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0109 00:02:35.574878 1684539 addons.go:429] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0109 00:02:35.574944 1684539 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0109 00:02:35.575048 1684539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-983119
	I0109 00:02:35.591908 1684539 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.3
	I0109 00:02:35.597881 1684539 addons.go:429] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0109 00:02:35.597901 1684539 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0109 00:02:35.597960 1684539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-983119
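The `scp memory -->` entries mean each manifest is streamed from minikube's embedded assets over the SSH session into the node, not copied from a file on disk. A rough shell analogue, using the forwarded port and user that appear later in this log (the `tee` form illustrates the effect, not minikube's actual transport):

	# stream manifest bytes into the node over the forwarded SSH port
	ssh -p 34369 docker@127.0.0.1 \
	  "sudo tee /etc/kubernetes/addons/nvidia-device-plugin.yaml >/dev/null" < nvidia-device-plugin.yaml
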
	I0109 00:02:35.600152 1684539 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.6.4
	I0109 00:02:35.593047 1684539 addons.go:237] Setting addon default-storageclass=true in "addons-983119"
	I0109 00:02:35.593070 1684539 addons.go:237] Setting addon storage-provisioner=true in "addons-983119"
	I0109 00:02:35.593079 1684539 addons.go:237] Setting addon registry=true in "addons-983119"
	I0109 00:02:35.593091 1684539 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-983119"
	I0109 00:02:35.602933 1684539 cli_runner.go:164] Run: docker container inspect addons-983119 --format={{.State.Status}}
	I0109 00:02:35.607364 1684539 host.go:66] Checking if "addons-983119" exists ...
	I0109 00:02:35.607863 1684539 cli_runner.go:164] Run: docker container inspect addons-983119 --format={{.State.Status}}
	I0109 00:02:35.647215 1684539 host.go:66] Checking if "addons-983119" exists ...
	I0109 00:02:35.647688 1684539 cli_runner.go:164] Run: docker container inspect addons-983119 --format={{.State.Status}}
	I0109 00:02:35.651474 1684539 addons.go:429] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0109 00:02:35.651496 1684539 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0109 00:02:35.651647 1684539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-983119
	I0109 00:02:35.664600 1684539 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.13
	I0109 00:02:35.694344 1684539 addons.go:429] installing /etc/kubernetes/addons/deployment.yaml
	I0109 00:02:35.694373 1684539 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0109 00:02:35.694463 1684539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-983119
	I0109 00:02:35.702003 1684539 host.go:66] Checking if "addons-983119" exists ...
	I0109 00:02:35.705809 1684539 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0109 00:02:35.665924 1684539 host.go:66] Checking if "addons-983119" exists ...
	I0109 00:02:35.707882 1684539 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I0109 00:02:35.710090 1684539 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I0109 00:02:35.708093 1684539 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0109 00:02:35.708697 1684539 cli_runner.go:164] Run: docker container inspect addons-983119 --format={{.State.Status}}
	I0109 00:02:35.712462 1684539 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0109 00:02:35.715435 1684539 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0109 00:02:35.718051 1684539 addons.go:429] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0109 00:02:35.718071 1684539 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0109 00:02:35.718158 1684539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-983119
	I0109 00:02:35.749706 1684539 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0109 00:02:35.754682 1684539 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0109 00:02:35.750626 1684539 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.9.5
	I0109 00:02:35.751814 1684539 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0109 00:02:35.779676 1684539 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0109 00:02:35.784337 1684539 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.23.1
	I0109 00:02:35.784475 1684539 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0109 00:02:35.793733 1684539 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0109 00:02:35.784719 1684539 addons.go:429] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0109 00:02:35.791475 1684539 addons.go:429] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0109 00:02:35.801053 1684539 addons.go:429] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0109 00:02:35.801120 1684539 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0109 00:02:35.801228 1684539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-983119
	I0109 00:02:35.812723 1684539 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0109 00:02:35.797247 1684539 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16103 bytes)
	I0109 00:02:35.797255 1684539 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0109 00:02:35.822568 1684539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-983119
	I0109 00:02:35.843092 1684539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34369 SSHKeyPath:/home/jenkins/minikube-integration/17830-1678586/.minikube/machines/addons-983119/id_rsa Username:docker}
	I0109 00:02:35.844155 1684539 addons.go:429] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0109 00:02:35.844172 1684539 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0109 00:02:35.844228 1684539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-983119
	I0109 00:02:35.868242 1684539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-983119
	I0109 00:02:35.889597 1684539 addons.go:429] installing /etc/kubernetes/addons/storageclass.yaml
	I0109 00:02:35.889618 1684539 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0109 00:02:35.889678 1684539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-983119
	I0109 00:02:35.919424 1684539 addons.go:237] Setting addon storage-provisioner-rancher=true in "addons-983119"
	I0109 00:02:35.919519 1684539 host.go:66] Checking if "addons-983119" exists ...
	I0109 00:02:35.919982 1684539 cli_runner.go:164] Run: docker container inspect addons-983119 --format={{.State.Status}}
	I0109 00:02:35.937972 1684539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34369 SSHKeyPath:/home/jenkins/minikube-integration/17830-1678586/.minikube/machines/addons-983119/id_rsa Username:docker}
	I0109 00:02:35.967453 1684539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34369 SSHKeyPath:/home/jenkins/minikube-integration/17830-1678586/.minikube/machines/addons-983119/id_rsa Username:docker}
	I0109 00:02:35.989223 1684539 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0109 00:02:35.982984 1684539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34369 SSHKeyPath:/home/jenkins/minikube-integration/17830-1678586/.minikube/machines/addons-983119/id_rsa Username:docker}
	I0109 00:02:35.995463 1684539 addons.go:429] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0109 00:02:35.995478 1684539 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0109 00:02:35.995534 1684539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-983119
	I0109 00:02:35.998572 1684539 out.go:177]   - Using image docker.io/registry:2.8.3
	I0109 00:02:36.002763 1684539 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I0109 00:02:36.005346 1684539 addons.go:429] installing /etc/kubernetes/addons/registry-rc.yaml
	I0109 00:02:36.005368 1684539 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0109 00:02:36.005437 1684539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-983119
	I0109 00:02:36.004956 1684539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34369 SSHKeyPath:/home/jenkins/minikube-integration/17830-1678586/.minikube/machines/addons-983119/id_rsa Username:docker}
	I0109 00:02:36.068725 1684539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34369 SSHKeyPath:/home/jenkins/minikube-integration/17830-1678586/.minikube/machines/addons-983119/id_rsa Username:docker}
	I0109 00:02:36.096738 1684539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34369 SSHKeyPath:/home/jenkins/minikube-integration/17830-1678586/.minikube/machines/addons-983119/id_rsa Username:docker}
	I0109 00:02:36.098647 1684539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34369 SSHKeyPath:/home/jenkins/minikube-integration/17830-1678586/.minikube/machines/addons-983119/id_rsa Username:docker}
	I0109 00:02:36.136198 1684539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34369 SSHKeyPath:/home/jenkins/minikube-integration/17830-1678586/.minikube/machines/addons-983119/id_rsa Username:docker}
	I0109 00:02:36.148398 1684539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34369 SSHKeyPath:/home/jenkins/minikube-integration/17830-1678586/.minikube/machines/addons-983119/id_rsa Username:docker}
	I0109 00:02:36.159251 1684539 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0109 00:02:36.156301 1684539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34369 SSHKeyPath:/home/jenkins/minikube-integration/17830-1678586/.minikube/machines/addons-983119/id_rsa Username:docker}
	I0109 00:02:36.163697 1684539 out.go:177]   - Using image docker.io/busybox:stable
	I0109 00:02:36.165880 1684539 addons.go:429] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0109 00:02:36.165899 1684539 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0109 00:02:36.165966 1684539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-983119
	I0109 00:02:36.181266 1684539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34369 SSHKeyPath:/home/jenkins/minikube-integration/17830-1678586/.minikube/machines/addons-983119/id_rsa Username:docker}
	I0109 00:02:36.214869 1684539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34369 SSHKeyPath:/home/jenkins/minikube-integration/17830-1678586/.minikube/machines/addons-983119/id_rsa Username:docker}
	W0109 00:02:36.216396 1684539 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0109 00:02:36.216428 1684539 retry.go:31] will retry after 296.308987ms: ssh: handshake failed: EOF
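The handshake EOF above is a transient condition while the node's sshd is still starting; sshutil backs off (296ms here) and retries. The probe-and-retry idea, sketched in shell against the forwarded port from this log:

	# keep probing until sshd on the forwarded port accepts a session
	for attempt in $(seq 1 10); do
	  ssh -o ConnectTimeout=2 -o StrictHostKeyChecking=no -p 34369 docker@127.0.0.1 true && break
	  sleep 0.3
	done
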
	I0109 00:02:36.450169 1684539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0109 00:02:36.474254 1684539 addons.go:429] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0109 00:02:36.474273 1684539 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0109 00:02:36.562355 1684539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0109 00:02:36.572538 1684539 addons.go:429] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0109 00:02:36.572608 1684539 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0109 00:02:36.576314 1684539 addons.go:429] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0109 00:02:36.576381 1684539 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0109 00:02:36.605138 1684539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0109 00:02:36.608905 1684539 addons.go:429] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0109 00:02:36.608970 1684539 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0109 00:02:36.624846 1684539 addons.go:429] installing /etc/kubernetes/addons/registry-svc.yaml
	I0109 00:02:36.624918 1684539 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0109 00:02:36.631083 1684539 addons.go:429] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0109 00:02:36.631151 1684539 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0109 00:02:36.674738 1684539 addons.go:429] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0109 00:02:36.674808 1684539 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0109 00:02:36.681013 1684539 addons.go:429] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0109 00:02:36.681091 1684539 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0109 00:02:36.692421 1684539 addons.go:429] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0109 00:02:36.692497 1684539 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0109 00:02:36.701309 1684539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0109 00:02:36.704476 1684539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0109 00:02:36.712789 1684539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0109 00:02:36.721005 1684539 addons.go:429] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0109 00:02:36.721075 1684539 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0109 00:02:36.782499 1684539 addons.go:429] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0109 00:02:36.782570 1684539 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0109 00:02:36.812276 1684539 addons.go:429] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0109 00:02:36.812348 1684539 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0109 00:02:36.818466 1684539 addons.go:429] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0109 00:02:36.818533 1684539 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0109 00:02:36.824714 1684539 addons.go:429] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0109 00:02:36.824785 1684539 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0109 00:02:36.860217 1684539 addons.go:429] installing /etc/kubernetes/addons/ig-role.yaml
	I0109 00:02:36.860286 1684539 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0109 00:02:36.872827 1684539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0109 00:02:36.902377 1684539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0109 00:02:36.939996 1684539 addons.go:429] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0109 00:02:36.940019 1684539 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0109 00:02:36.962899 1684539 addons.go:429] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0109 00:02:36.962922 1684539 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0109 00:02:36.973210 1684539 addons.go:429] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0109 00:02:36.973282 1684539 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0109 00:02:37.043571 1684539 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-983119" context rescaled to 1 replicas
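The rescale above trims CoreDNS to one replica, which is all a single-node cluster needs; the kubectl equivalent would be:

	kubectl --context addons-983119 -n kube-system scale deployment coredns --replicas=1
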
	I0109 00:02:37.043656 1684539 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0109 00:02:37.046578 1684539 out.go:177] * Verifying Kubernetes components...
	I0109 00:02:37.048768 1684539 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0109 00:02:37.088592 1684539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0109 00:02:37.104892 1684539 addons.go:429] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0109 00:02:37.104963 1684539 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0109 00:02:37.166029 1684539 addons.go:429] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0109 00:02:37.166097 1684539 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0109 00:02:37.181255 1684539 addons.go:429] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0109 00:02:37.181325 1684539 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0109 00:02:37.188712 1684539 addons.go:429] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0109 00:02:37.188782 1684539 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0109 00:02:37.349156 1684539 addons.go:429] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0109 00:02:37.349227 1684539 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0109 00:02:37.383212 1684539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0109 00:02:37.396387 1684539 addons.go:429] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0109 00:02:37.396457 1684539 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0109 00:02:37.433891 1684539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0109 00:02:37.555291 1684539 addons.go:429] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0109 00:02:37.555367 1684539 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0109 00:02:37.575465 1684539 addons.go:429] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0109 00:02:37.575539 1684539 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0109 00:02:37.714850 1684539 addons.go:429] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0109 00:02:37.714923 1684539 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0109 00:02:37.720426 1684539 addons.go:429] installing /etc/kubernetes/addons/ig-crd.yaml
	I0109 00:02:37.720495 1684539 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0109 00:02:37.795072 1684539 addons.go:429] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0109 00:02:37.795095 1684539 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0109 00:02:37.797630 1684539 addons.go:429] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0109 00:02:37.797652 1684539 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0109 00:02:37.852430 1684539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0109 00:02:37.886712 1684539 addons.go:429] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0109 00:02:37.886753 1684539 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0109 00:02:38.019625 1684539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0109 00:02:39.634059 1684539 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.857807286s)
	I0109 00:02:39.634089 1684539 start.go:929] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
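The sed pipeline that just completed (3.86s including both kubectl round-trips) rewrote the coredns ConfigMap so the Corefile gains a hosts block mapping host.minikube.internal to the gateway address, plus a log directive. To confirm from outside the node:

	# the stanza injected by the sed expression should now be in the Corefile
	kubectl --context addons-983119 -n kube-system get configmap coredns -o yaml | grep -A3 'hosts {'

which should print the block the pipeline inserted:

	hosts {
	   192.168.49.1 host.minikube.internal
	   fallthrough
	}
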
	I0109 00:02:40.275766 1684539 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (3.82551674s)
	I0109 00:02:40.870344 1684539 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.307888254s)
	I0109 00:02:41.498919 1684539 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.797512599s)
	I0109 00:02:41.499077 1684539 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.893879971s)
	I0109 00:02:42.052557 1684539 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.348010524s)
	I0109 00:02:42.052643 1684539 addons.go:473] Verifying addon ingress=true in "addons-983119"
	I0109 00:02:42.054896 1684539 out.go:177] * Verifying ingress addon...
	I0109 00:02:42.052885 1684539 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (5.340036244s)
	I0109 00:02:42.053029 1684539 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.180173482s)
	I0109 00:02:42.053067 1684539 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (5.150667657s)
	I0109 00:02:42.053083 1684539 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (5.004264065s)
	I0109 00:02:42.053124 1684539 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.964455988s)
	I0109 00:02:42.053220 1684539 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.669936009s)
	I0109 00:02:42.053326 1684539 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.61936564s)
	I0109 00:02:42.053423 1684539 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (4.200963382s)
	I0109 00:02:42.055291 1684539 addons.go:473] Verifying addon registry=true in "addons-983119"
	W0109 00:02:42.055336 1684539 addons.go:455] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0109 00:02:42.055352 1684539 addons.go:473] Verifying addon metrics-server=true in "addons-983119"
	I0109 00:02:42.056197 1684539 node_ready.go:35] waiting up to 6m0s for node "addons-983119" to be "Ready" ...
	I0109 00:02:42.059057 1684539 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0109 00:02:42.061082 1684539 out.go:177] * Verifying registry addon...
	I0109 00:02:42.061315 1684539 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-983119 service yakd-dashboard -n yakd-dashboard
	
	
	I0109 00:02:42.065615 1684539 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0109 00:02:42.061330 1684539 retry.go:31] will retry after 249.880943ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
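Both failures above are the usual CRD-establishment race: one `kubectl apply` batch creates the VolumeSnapshot CRDs and, in the same pass, a VolumeSnapshotClass custom resource, and the API server rejects the CR because the just-created CRD is not yet established. minikube handles it by retrying; the second attempt (00:02:42.317, with `--force` added) succeeds about 1.6s later once the CRDs have registered. To avoid the race in a single pass, apply the CRDs first and wait for them:

	# apply the CRD, wait until it is established, then apply resources that use it
	kubectl apply -f snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	kubectl wait --for=condition=Established crd/volumesnapshotclasses.snapshot.storage.k8s.io --timeout=60s
	kubectl apply -f csi-hostpath-snapshotclass.yaml
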
	I0109 00:02:42.096012 1684539 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0109 00:02:42.096048 1684539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0109 00:02:42.098543 1684539 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0109 00:02:42.098577 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0109 00:02:42.317750 1684539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0109 00:02:42.431451 1684539 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (4.411757164s)
	I0109 00:02:42.431485 1684539 addons.go:473] Verifying addon csi-hostpath-driver=true in "addons-983119"
	I0109 00:02:42.433764 1684539 out.go:177] * Verifying csi-hostpath-driver addon...
	I0109 00:02:42.437158 1684539 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0109 00:02:42.476462 1684539 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0109 00:02:42.476490 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
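The kapi.go loops above poll pods by label selector until they leave Pending, alongside the 6m node-readiness wait. Expressed as roughly equivalent kubectl waits (minikube polls through client-go rather than shelling out, so these commands are only an illustration):

	kubectl --context addons-983119 -n ingress-nginx wait --for=condition=Ready pod -l app.kubernetes.io/name=ingress-nginx --timeout=6m
	kubectl --context addons-983119 -n kube-system wait --for=condition=Ready pod -l kubernetes.io/minikube-addons=registry --timeout=6m
	kubectl --context addons-983119 -n kube-system wait --for=condition=Ready pod -l kubernetes.io/minikube-addons=csi-hostpath-driver --timeout=6m
	kubectl --context addons-983119 wait --for=condition=Ready node/addons-983119 --timeout=6m
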
	I0109 00:02:42.588353 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0109 00:02:42.589571 1684539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0109 00:02:42.943919 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0109 00:02:43.076476 1684539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0109 00:02:43.082254 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0109 00:02:43.448265 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0109 00:02:43.616314 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0109 00:02:43.644025 1684539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0109 00:02:43.893078 1684539 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.575241356s)
	I0109 00:02:43.943437 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0109 00:02:44.066327 1684539 node_ready.go:58] node "addons-983119" has status "Ready":"False"
	I0109 00:02:44.067770 1684539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0109 00:02:44.072399 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0109 00:02:44.363368 1684539 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0109 00:02:44.363457 1684539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-983119
	I0109 00:02:44.396810 1684539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34369 SSHKeyPath:/home/jenkins/minikube-integration/17830-1678586/.minikube/machines/addons-983119/id_rsa Username:docker}
	I0109 00:02:44.441782 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0109 00:02:44.571477 1684539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0109 00:02:44.577349 1684539 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0109 00:02:44.585121 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0109 00:02:44.653956 1684539 addons.go:237] Setting addon gcp-auth=true in "addons-983119"
	I0109 00:02:44.654022 1684539 host.go:66] Checking if "addons-983119" exists ...
	I0109 00:02:44.654554 1684539 cli_runner.go:164] Run: docker container inspect addons-983119 --format={{.State.Status}}
	I0109 00:02:44.724101 1684539 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0109 00:02:44.724152 1684539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-983119
	I0109 00:02:44.760217 1684539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34369 SSHKeyPath:/home/jenkins/minikube-integration/17830-1678586/.minikube/machines/addons-983119/id_rsa Username:docker}
	I0109 00:02:44.923350 1684539 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I0109 00:02:44.926238 1684539 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I0109 00:02:44.928683 1684539 addons.go:429] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0109 00:02:44.928707 1684539 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0109 00:02:44.946142 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0109 00:02:44.987742 1684539 addons.go:429] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0109 00:02:44.987768 1684539 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0109 00:02:45.069516 1684539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0109 00:02:45.073854 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0109 00:02:45.086560 1684539 addons.go:429] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0109 00:02:45.086589 1684539 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5432 bytes)
	I0109 00:02:45.155480 1684539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
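gcp-auth wires the credentials copied at 00:02:44 into workloads through the webhook deployed by gcp-auth-webhook.yaml (note the kube-webhook-certgen and gcp-auth-webhook images above). Once the apply finishes, a quick check that the webhook registered and its pod is coming up (an illustration, not part of the test):

	kubectl --context addons-983119 get mutatingwebhookconfigurations
	kubectl --context addons-983119 -n gcp-auth get pods
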
	I0109 00:02:45.444259 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0109 00:02:45.572652 1684539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0109 00:02:45.582666 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0109 00:02:45.950260 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0109 00:02:46.077096 1684539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0109 00:02:46.077709 1684539 node_ready.go:58] node "addons-983119" has status "Ready":"False"
	I0109 00:02:46.100345 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0109 00:02:46.135499 1684539 addons.go:473] Verifying addon gcp-auth=true in "addons-983119"
	I0109 00:02:46.137676 1684539 out.go:177] * Verifying gcp-auth addon...
	I0109 00:02:46.140760 1684539 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0109 00:02:46.150828 1684539 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0109 00:02:46.150849 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0109 00:02:46.442478 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0109 00:02:46.567018 1684539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0109 00:02:46.572507 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0109 00:02:46.645822 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0109 00:02:46.941647 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0109 00:02:47.066938 1684539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0109 00:02:47.071346 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0109 00:02:47.144826 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0109 00:02:47.442718 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0109 00:02:47.584642 1684539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0109 00:02:47.585574 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0109 00:02:47.645506 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0109 00:02:47.945512 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0109 00:02:48.070709 1684539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0109 00:02:48.073631 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0109 00:02:48.145192 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0109 00:02:48.442000 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0109 00:02:48.565484 1684539 node_ready.go:58] node "addons-983119" has status "Ready":"False"
	I0109 00:02:48.566644 1684539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0109 00:02:48.571962 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0109 00:02:48.645144 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0109 00:02:48.942304 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0109 00:02:49.066148 1684539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0109 00:02:49.075929 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0109 00:02:49.144201 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0109 00:02:49.442814 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0109 00:02:49.566748 1684539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0109 00:02:49.571580 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0109 00:02:49.645157 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0109 00:02:49.942212 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0109 00:02:50.079227 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0109 00:02:50.080158 1684539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0109 00:02:50.144902 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0109 00:02:50.442422 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0109 00:02:50.566288 1684539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0109 00:02:50.571544 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0109 00:02:50.644951 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0109 00:02:50.941448 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0109 00:02:51.065123 1684539 node_ready.go:58] node "addons-983119" has status "Ready":"False"
	I0109 00:02:51.066673 1684539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0109 00:02:51.071454 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0109 00:02:51.144570 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0109 00:02:51.441509 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0109 00:02:51.566736 1684539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0109 00:02:51.571695 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0109 00:02:51.644070 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0109 00:02:51.942012 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0109 00:02:52.067840 1684539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0109 00:02:52.071578 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0109 00:02:52.144527 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0109 00:02:52.443211 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0109 00:02:52.565784 1684539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0109 00:02:52.571741 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0109 00:02:52.645055 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0109 00:02:52.941407 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0109 00:02:53.065629 1684539 node_ready.go:58] node "addons-983119" has status "Ready":"False"
	I0109 00:02:53.066461 1684539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0109 00:02:53.070956 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0109 00:02:53.144303 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0109 00:02:53.441651 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0109 00:02:53.566261 1684539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0109 00:02:53.572099 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0109 00:02:53.644449 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0109 00:02:53.941659 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0109 00:02:54.067348 1684539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0109 00:02:54.072205 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0109 00:02:54.144911 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0109 00:02:54.441930 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0109 00:02:54.566716 1684539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0109 00:02:54.571176 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0109 00:02:54.644580 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0109 00:02:54.941418 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0109 00:02:55.066634 1684539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0109 00:02:55.071211 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0109 00:02:55.144387 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0109 00:02:55.442095 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0109 00:02:55.564636 1684539 node_ready.go:58] node "addons-983119" has status "Ready":"False"
	I0109 00:02:55.566077 1684539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0109 00:02:55.571644 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0109 00:02:55.644685 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0109 00:02:55.942306 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0109 00:02:56.066614 1684539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0109 00:02:56.071952 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0109 00:02:56.145232 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0109 00:02:56.441563 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0109 00:02:56.566279 1684539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0109 00:02:56.571595 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0109 00:02:56.645012 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0109 00:02:56.941333 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0109 00:02:57.066839 1684539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0109 00:02:57.072304 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0109 00:02:57.144425 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0109 00:02:57.441620 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0109 00:02:57.565420 1684539 node_ready.go:58] node "addons-983119" has status "Ready":"False"
	I0109 00:02:57.566255 1684539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0109 00:02:57.572033 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0109 00:02:57.644537 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0109 00:02:57.941910 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0109 00:02:58.065784 1684539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0109 00:02:58.072354 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0109 00:02:58.144388 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0109 00:02:58.441492 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0109 00:02:58.566586 1684539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0109 00:02:58.571528 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0109 00:02:58.644623 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0109 00:02:58.941610 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0109 00:02:59.068154 1684539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0109 00:02:59.071480 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0109 00:02:59.144504 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0109 00:02:59.441863 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0109 00:02:59.566428 1684539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0109 00:02:59.570904 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0109 00:02:59.644921 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0109 00:02:59.942161 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0109 00:03:00.065567 1684539 node_ready.go:58] node "addons-983119" has status "Ready":"False"
	I0109 00:03:00.067244 1684539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0109 00:03:00.073001 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0109 00:03:00.144943 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0109 00:03:00.442531 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0109 00:03:00.566648 1684539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0109 00:03:00.571308 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0109 00:03:00.644330 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0109 00:03:00.941208 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0109 00:03:01.066508 1684539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0109 00:03:01.071473 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0109 00:03:01.144474 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0109 00:03:01.441547 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0109 00:03:01.565816 1684539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0109 00:03:01.571897 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0109 00:03:01.645056 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0109 00:03:01.942350 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0109 00:03:02.065858 1684539 node_ready.go:58] node "addons-983119" has status "Ready":"False"
	I0109 00:03:02.066212 1684539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0109 00:03:02.071614 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0109 00:03:02.144719 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0109 00:03:02.443565 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0109 00:03:02.565983 1684539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0109 00:03:02.571993 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0109 00:03:02.644506 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0109 00:03:02.941496 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0109 00:03:03.066664 1684539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0109 00:03:03.071512 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0109 00:03:03.144931 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0109 00:03:03.441477 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0109 00:03:03.566207 1684539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0109 00:03:03.571993 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0109 00:03:03.644401 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0109 00:03:03.941996 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0109 00:03:04.066319 1684539 node_ready.go:58] node "addons-983119" has status "Ready":"False"
	I0109 00:03:04.068120 1684539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0109 00:03:04.071745 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0109 00:03:04.144697 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0109 00:03:04.442900 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0109 00:03:04.566585 1684539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0109 00:03:04.571479 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0109 00:03:04.645143 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0109 00:03:04.941972 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0109 00:03:05.066966 1684539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0109 00:03:05.072053 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0109 00:03:05.148228 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0109 00:03:05.463332 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0109 00:03:05.566121 1684539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0109 00:03:05.571945 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0109 00:03:05.644591 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0109 00:03:05.941513 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0109 00:03:06.065712 1684539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0109 00:03:06.066345 1684539 node_ready.go:58] node "addons-983119" has status "Ready":"False"
	I0109 00:03:06.071887 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0109 00:03:06.144954 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0109 00:03:06.450104 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0109 00:03:06.565774 1684539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0109 00:03:06.571957 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0109 00:03:06.644374 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0109 00:03:06.941845 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0109 00:03:07.066735 1684539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0109 00:03:07.071609 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0109 00:03:07.145145 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0109 00:03:07.444279 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0109 00:03:07.566217 1684539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0109 00:03:07.571350 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0109 00:03:07.644356 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0109 00:03:07.941369 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0109 00:03:08.066461 1684539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0109 00:03:08.066956 1684539 node_ready.go:58] node "addons-983119" has status "Ready":"False"
	I0109 00:03:08.071326 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0109 00:03:08.148255 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0109 00:03:08.458512 1684539 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0109 00:03:08.458538 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0109 00:03:08.569033 1684539 node_ready.go:49] node "addons-983119" has status "Ready":"True"
	I0109 00:03:08.569061 1684539 node_ready.go:38] duration metric: took 26.507711402s waiting for node "addons-983119" to be "Ready" ...
	I0109 00:03:08.569072 1684539 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
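The pod_ready.go wait announced above polls each system-critical pod's Ready condition until it reports True or the 6m0s budget expires, which is why the log alternates between "waiting up to 6m0s for pod ..." and "has status Ready:True/False" lines. Below is a minimal client-go sketch of that polling pattern; the function name waitPodReady, the kubeconfig lookup, and the 500ms interval are illustrative assumptions, not minikube's actual implementation:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodReady polls a pod until its Ready condition is True or the timeout
// elapses. Sketch only; minikube's pod_ready.go wraps this with retries and
// duration metrics like the ones printed in the log above.
func waitPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil {
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(500 * time.Millisecond) // roughly the ~0.5s cadence visible in the timestamps above
	}
	return fmt.Errorf("pod %s/%s not Ready within %v", ns, name, timeout)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Println(waitPodReady(cs, "kube-system", "coredns-5dd5756b68-vzg2p", 6*time.Minute))
}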
	I0109 00:03:08.583850 1684539 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0109 00:03:08.583874 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0109 00:03:08.585602 1684539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0109 00:03:08.590507 1684539 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-vzg2p" in "kube-system" namespace to be "Ready" ...
	I0109 00:03:08.655201 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0109 00:03:08.945146 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0109 00:03:09.091011 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0109 00:03:09.093400 1684539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0109 00:03:09.161150 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0109 00:03:09.442778 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0109 00:03:09.567090 1684539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0109 00:03:09.572620 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0109 00:03:09.597731 1684539 pod_ready.go:92] pod "coredns-5dd5756b68-vzg2p" in "kube-system" namespace has status "Ready":"True"
	I0109 00:03:09.597814 1684539 pod_ready.go:81] duration metric: took 1.007271562s waiting for pod "coredns-5dd5756b68-vzg2p" in "kube-system" namespace to be "Ready" ...
	I0109 00:03:09.597850 1684539 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-983119" in "kube-system" namespace to be "Ready" ...
	I0109 00:03:09.604962 1684539 pod_ready.go:92] pod "etcd-addons-983119" in "kube-system" namespace has status "Ready":"True"
	I0109 00:03:09.605031 1684539 pod_ready.go:81] duration metric: took 7.147533ms waiting for pod "etcd-addons-983119" in "kube-system" namespace to be "Ready" ...
	I0109 00:03:09.605065 1684539 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-983119" in "kube-system" namespace to be "Ready" ...
	I0109 00:03:09.616222 1684539 pod_ready.go:92] pod "kube-apiserver-addons-983119" in "kube-system" namespace has status "Ready":"True"
	I0109 00:03:09.616290 1684539 pod_ready.go:81] duration metric: took 11.203207ms waiting for pod "kube-apiserver-addons-983119" in "kube-system" namespace to be "Ready" ...
	I0109 00:03:09.616317 1684539 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-983119" in "kube-system" namespace to be "Ready" ...
	I0109 00:03:09.643616 1684539 pod_ready.go:92] pod "kube-controller-manager-addons-983119" in "kube-system" namespace has status "Ready":"True"
	I0109 00:03:09.643683 1684539 pod_ready.go:81] duration metric: took 27.338011ms waiting for pod "kube-controller-manager-addons-983119" in "kube-system" namespace to be "Ready" ...
	I0109 00:03:09.643721 1684539 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-4864k" in "kube-system" namespace to be "Ready" ...
	I0109 00:03:09.648547 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0109 00:03:09.773257 1684539 pod_ready.go:92] pod "kube-proxy-4864k" in "kube-system" namespace has status "Ready":"True"
	I0109 00:03:09.773283 1684539 pod_ready.go:81] duration metric: took 129.54157ms waiting for pod "kube-proxy-4864k" in "kube-system" namespace to be "Ready" ...
	I0109 00:03:09.773295 1684539 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-983119" in "kube-system" namespace to be "Ready" ...
	I0109 00:03:09.953248 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0109 00:03:10.076373 1684539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0109 00:03:10.076985 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0109 00:03:10.146188 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0109 00:03:10.166314 1684539 pod_ready.go:92] pod "kube-scheduler-addons-983119" in "kube-system" namespace has status "Ready":"True"
	I0109 00:03:10.166384 1684539 pod_ready.go:81] duration metric: took 393.080814ms waiting for pod "kube-scheduler-addons-983119" in "kube-system" namespace to be "Ready" ...
	I0109 00:03:10.166413 1684539 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-7c66d45ddc-wvvsq" in "kube-system" namespace to be "Ready" ...
	I0109 00:03:10.443170 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0109 00:03:10.566811 1684539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0109 00:03:10.572125 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0109 00:03:10.644758 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0109 00:03:10.945039 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0109 00:03:11.065666 1684539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0109 00:03:11.073878 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0109 00:03:11.145183 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0109 00:03:11.443666 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0109 00:03:11.569508 1684539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0109 00:03:11.575115 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0109 00:03:11.645333 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0109 00:03:11.966685 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0109 00:03:12.088681 1684539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0109 00:03:12.103907 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0109 00:03:12.145870 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0109 00:03:12.173318 1684539 pod_ready.go:102] pod "metrics-server-7c66d45ddc-wvvsq" in "kube-system" namespace has status "Ready":"False"
	I0109 00:03:12.448638 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0109 00:03:12.567233 1684539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0109 00:03:12.578419 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0109 00:03:12.645657 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0109 00:03:12.943876 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0109 00:03:13.067224 1684539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0109 00:03:13.085481 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0109 00:03:13.146487 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0109 00:03:13.451850 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0109 00:03:13.575518 1684539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0109 00:03:13.583191 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0109 00:03:13.645269 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0109 00:03:13.943755 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0109 00:03:14.068209 1684539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0109 00:03:14.072769 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0109 00:03:14.144742 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0109 00:03:14.174108 1684539 pod_ready.go:102] pod "metrics-server-7c66d45ddc-wvvsq" in "kube-system" namespace has status "Ready":"False"
	I0109 00:03:14.443575 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0109 00:03:14.580494 1684539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0109 00:03:14.587544 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0109 00:03:14.645202 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0109 00:03:14.673902 1684539 pod_ready.go:92] pod "metrics-server-7c66d45ddc-wvvsq" in "kube-system" namespace has status "Ready":"True"
	I0109 00:03:14.673930 1684539 pod_ready.go:81] duration metric: took 4.507496817s waiting for pod "metrics-server-7c66d45ddc-wvvsq" in "kube-system" namespace to be "Ready" ...
	I0109 00:03:14.673943 1684539 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-2qj49" in "kube-system" namespace to be "Ready" ...
	I0109 00:03:14.945484 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0109 00:03:15.069183 1684539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0109 00:03:15.079619 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0109 00:03:15.145311 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0109 00:03:15.444326 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0109 00:03:15.568686 1684539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0109 00:03:15.578335 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0109 00:03:15.644887 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0109 00:03:15.944691 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0109 00:03:16.068415 1684539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0109 00:03:16.086113 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0109 00:03:16.154095 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0109 00:03:16.446786 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0109 00:03:16.567723 1684539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0109 00:03:16.574921 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0109 00:03:16.644543 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0109 00:03:16.681816 1684539 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-2qj49" in "kube-system" namespace has status "Ready":"False"
	I0109 00:03:16.959195 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0109 00:03:17.066140 1684539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0109 00:03:17.073734 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0109 00:03:17.144679 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0109 00:03:17.444783 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0109 00:03:17.567279 1684539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0109 00:03:17.578059 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0109 00:03:17.646375 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0109 00:03:17.943425 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0109 00:03:18.066779 1684539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0109 00:03:18.073175 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0109 00:03:18.145159 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0109 00:03:18.450364 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0109 00:03:18.566246 1684539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0109 00:03:18.572945 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0109 00:03:18.645243 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0109 00:03:18.687644 1684539 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-2qj49" in "kube-system" namespace has status "Ready":"False"
	I0109 00:03:18.943370 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0109 00:03:19.066119 1684539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0109 00:03:19.072605 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0109 00:03:19.145257 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0109 00:03:19.442740 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0109 00:03:19.566459 1684539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0109 00:03:19.571839 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0109 00:03:19.644601 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0109 00:03:19.942975 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0109 00:03:20.071176 1684539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0109 00:03:20.080882 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0109 00:03:20.147474 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0109 00:03:20.442975 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0109 00:03:20.566245 1684539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0109 00:03:20.573143 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0109 00:03:20.645036 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0109 00:03:20.945608 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0109 00:03:21.066175 1684539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0109 00:03:21.072645 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0109 00:03:21.144564 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0109 00:03:21.180770 1684539 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-2qj49" in "kube-system" namespace has status "Ready":"False"
	I0109 00:03:21.443125 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0109 00:03:21.569346 1684539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0109 00:03:21.572328 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0109 00:03:21.644822 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0109 00:03:21.942571 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0109 00:03:22.066524 1684539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0109 00:03:22.079576 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0109 00:03:22.147838 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0109 00:03:22.443887 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0109 00:03:22.566412 1684539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0109 00:03:22.572617 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0109 00:03:22.645141 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0109 00:03:22.944217 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0109 00:03:23.065940 1684539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0109 00:03:23.076135 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0109 00:03:23.144765 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0109 00:03:23.180804 1684539 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-2qj49" in "kube-system" namespace has status "Ready":"False"
	I0109 00:03:23.443138 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0109 00:03:23.566204 1684539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0109 00:03:23.572606 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0109 00:03:23.647860 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0109 00:03:23.943337 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0109 00:03:24.068250 1684539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0109 00:03:24.074133 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0109 00:03:24.145776 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0109 00:03:24.445790 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0109 00:03:24.566309 1684539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0109 00:03:24.574377 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0109 00:03:24.644503 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0109 00:03:24.943725 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0109 00:03:25.067216 1684539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0109 00:03:25.075498 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0109 00:03:25.144903 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0109 00:03:25.183484 1684539 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-2qj49" in "kube-system" namespace has status "Ready":"False"
	I0109 00:03:25.444394 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0109 00:03:25.565918 1684539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0109 00:03:25.572462 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0109 00:03:25.644449 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0109 00:03:25.944444 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0109 00:03:26.066633 1684539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0109 00:03:26.076908 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0109 00:03:26.145314 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0109 00:03:26.443627 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0109 00:03:26.566713 1684539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0109 00:03:26.573127 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0109 00:03:26.645150 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0109 00:03:26.946507 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0109 00:03:27.066306 1684539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0109 00:03:27.072816 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0109 00:03:27.148678 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0109 00:03:27.444464 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0109 00:03:27.566709 1684539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0109 00:03:27.574350 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0109 00:03:27.645177 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0109 00:03:27.683407 1684539 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-2qj49" in "kube-system" namespace has status "Ready":"False"
	I0109 00:03:27.944532 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0109 00:03:28.067062 1684539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0109 00:03:28.074659 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0109 00:03:28.146836 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0109 00:03:28.444271 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0109 00:03:28.566054 1684539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0109 00:03:28.573121 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0109 00:03:28.645424 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0109 00:03:28.945379 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0109 00:03:29.065987 1684539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0109 00:03:29.072433 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0109 00:03:29.144457 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0109 00:03:29.443304 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0109 00:03:29.565743 1684539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0109 00:03:29.572384 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0109 00:03:29.644393 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0109 00:03:29.692295 1684539 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-2qj49" in "kube-system" namespace has status "Ready":"False"
	I0109 00:03:29.944926 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0109 00:03:30.067753 1684539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0109 00:03:30.089478 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0109 00:03:30.145763 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0109 00:03:30.445389 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0109 00:03:30.592080 1684539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0109 00:03:30.622474 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0109 00:03:30.645858 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0109 00:03:30.943644 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0109 00:03:31.067009 1684539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0109 00:03:31.072660 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0109 00:03:31.145288 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0109 00:03:31.443086 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0109 00:03:31.566078 1684539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0109 00:03:31.573943 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0109 00:03:31.644733 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0109 00:03:31.943241 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0109 00:03:32.066979 1684539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0109 00:03:32.071742 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0109 00:03:32.144737 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0109 00:03:32.181223 1684539 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-2qj49" in "kube-system" namespace has status "Ready":"False"
	I0109 00:03:32.445010 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0109 00:03:32.566844 1684539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0109 00:03:32.572221 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0109 00:03:32.646704 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0109 00:03:32.944680 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0109 00:03:33.066556 1684539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0109 00:03:33.073212 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0109 00:03:33.145148 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0109 00:03:33.445576 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0109 00:03:33.570149 1684539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0109 00:03:33.576001 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0109 00:03:33.645536 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0109 00:03:33.943793 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0109 00:03:34.065880 1684539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0109 00:03:34.073563 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0109 00:03:34.144503 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0109 00:03:34.442631 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0109 00:03:34.566079 1684539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0109 00:03:34.572736 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0109 00:03:34.644666 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0109 00:03:34.680184 1684539 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-2qj49" in "kube-system" namespace has status "Ready":"False"
	I0109 00:03:34.942903 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0109 00:03:35.066676 1684539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0109 00:03:35.072411 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0109 00:03:35.144568 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0109 00:03:35.442759 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0109 00:03:35.565703 1684539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0109 00:03:35.572050 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0109 00:03:35.644985 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0109 00:03:35.943717 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0109 00:03:36.067963 1684539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0109 00:03:36.075249 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0109 00:03:36.145240 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0109 00:03:36.444185 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0109 00:03:36.567060 1684539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0109 00:03:36.572453 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0109 00:03:36.645091 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0109 00:03:36.684870 1684539 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-2qj49" in "kube-system" namespace has status "Ready":"False"
	I0109 00:03:36.944686 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0109 00:03:37.066634 1684539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0109 00:03:37.072696 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0109 00:03:37.144652 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0109 00:03:37.449641 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0109 00:03:37.568447 1684539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0109 00:03:37.575416 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0109 00:03:37.645111 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0109 00:03:37.944197 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0109 00:03:38.066524 1684539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0109 00:03:38.072400 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0109 00:03:38.145875 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0109 00:03:38.448790 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0109 00:03:38.566680 1684539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0109 00:03:38.572171 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0109 00:03:38.645502 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0109 00:03:38.942714 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0109 00:03:39.068302 1684539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0109 00:03:39.075624 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0109 00:03:39.145630 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0109 00:03:39.180399 1684539 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-2qj49" in "kube-system" namespace has status "Ready":"False"
	I0109 00:03:39.442717 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0109 00:03:39.566093 1684539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0109 00:03:39.573941 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0109 00:03:39.644932 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0109 00:03:39.943840 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0109 00:03:40.066733 1684539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0109 00:03:40.073281 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0109 00:03:40.145200 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0109 00:03:40.181901 1684539 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-2qj49" in "kube-system" namespace has status "Ready":"True"
	I0109 00:03:40.181924 1684539 pod_ready.go:81] duration metric: took 25.507972663s waiting for pod "nvidia-device-plugin-daemonset-2qj49" in "kube-system" namespace to be "Ready" ...
	I0109 00:03:40.181990 1684539 pod_ready.go:38] duration metric: took 31.612904808s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0109 00:03:40.182022 1684539 api_server.go:52] waiting for apiserver process to appear ...
	I0109 00:03:40.182071 1684539 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0109 00:03:40.182160 1684539 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0109 00:03:40.247665 1684539 cri.go:89] found id: "62783a6bd185f396cb47eb0c07714d63bd5492ab074f91c3541bdb334eed0f96"
	I0109 00:03:40.247688 1684539 cri.go:89] found id: ""
	I0109 00:03:40.247697 1684539 logs.go:284] 1 containers: [62783a6bd185f396cb47eb0c07714d63bd5492ab074f91c3541bdb334eed0f96]
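The cri.go listing above runs `sudo crictl ps -a --quiet --name=<component>` on the node; `--quiet` prints one container ID per line, and the empty trailing line accounts for the `found id: ""` entries in the log. A hedged local sketch of that call using os/exec (assuming crictl is on PATH with sufficient privileges; the real code executes it over SSH via ssh_runner.go):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainerIDs mirrors the logged `crictl ps -a --quiet --name=...`
// invocation: it collects the non-empty ID lines that crictl prints.
func listContainerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	var ids []string
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line != "" {
			ids = append(ids, line)
		}
	}
	return ids, nil
}

func main() {
	ids, err := listContainerIDs("kube-apiserver")
	fmt.Println(ids, err)
}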
	I0109 00:03:40.247767 1684539 ssh_runner.go:195] Run: which crictl
	I0109 00:03:40.252676 1684539 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0109 00:03:40.252762 1684539 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0109 00:03:40.301587 1684539 cri.go:89] found id: "1011042feffcab2580c6186d821b65221c50d7197c9277d66b72a69e20117153"
	I0109 00:03:40.301611 1684539 cri.go:89] found id: ""
	I0109 00:03:40.301619 1684539 logs.go:284] 1 containers: [1011042feffcab2580c6186d821b65221c50d7197c9277d66b72a69e20117153]
	I0109 00:03:40.301697 1684539 ssh_runner.go:195] Run: which crictl
	I0109 00:03:40.306788 1684539 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0109 00:03:40.306886 1684539 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0109 00:03:40.356240 1684539 cri.go:89] found id: "b7a4ca1bd2b68b45e2e49096733a1a921f7244222487f8532eedf55bf9fc310a"
	I0109 00:03:40.356264 1684539 cri.go:89] found id: ""
	I0109 00:03:40.356272 1684539 logs.go:284] 1 containers: [b7a4ca1bd2b68b45e2e49096733a1a921f7244222487f8532eedf55bf9fc310a]
	I0109 00:03:40.356354 1684539 ssh_runner.go:195] Run: which crictl
	I0109 00:03:40.361282 1684539 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0109 00:03:40.361381 1684539 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0109 00:03:40.421087 1684539 cri.go:89] found id: "ca546a071dfbff1b1fc40b95cfb1af00bc76cb73e484276c72fa0b1f9c46aeec"
	I0109 00:03:40.421112 1684539 cri.go:89] found id: ""
	I0109 00:03:40.421120 1684539 logs.go:284] 1 containers: [ca546a071dfbff1b1fc40b95cfb1af00bc76cb73e484276c72fa0b1f9c46aeec]
	I0109 00:03:40.421202 1684539 ssh_runner.go:195] Run: which crictl
	I0109 00:03:40.426079 1684539 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0109 00:03:40.426175 1684539 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0109 00:03:40.444703 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0109 00:03:40.504648 1684539 cri.go:89] found id: "c83f5ce969abb821559581b8daa7137b2e93fabc70e7eb5fd6bd59e8f3a5d791"
	I0109 00:03:40.504672 1684539 cri.go:89] found id: ""
	I0109 00:03:40.504681 1684539 logs.go:284] 1 containers: [c83f5ce969abb821559581b8daa7137b2e93fabc70e7eb5fd6bd59e8f3a5d791]
	I0109 00:03:40.504754 1684539 ssh_runner.go:195] Run: which crictl
	I0109 00:03:40.510783 1684539 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0109 00:03:40.510873 1684539 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0109 00:03:40.566572 1684539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0109 00:03:40.574904 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0109 00:03:40.582409 1684539 cri.go:89] found id: "a57018d4c10a853439070e81b0334c348d0a675fb9830a2b3876094408d092db"
	I0109 00:03:40.582463 1684539 cri.go:89] found id: ""
	I0109 00:03:40.582471 1684539 logs.go:284] 1 containers: [a57018d4c10a853439070e81b0334c348d0a675fb9830a2b3876094408d092db]
	I0109 00:03:40.582563 1684539 ssh_runner.go:195] Run: which crictl
	I0109 00:03:40.587380 1684539 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0109 00:03:40.587467 1684539 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0109 00:03:40.636630 1684539 cri.go:89] found id: "b0dd2e239ba472bd999437617e44545f1256b50285fac10dbda59ac5e2f56bcc"
	I0109 00:03:40.636654 1684539 cri.go:89] found id: ""
	I0109 00:03:40.636663 1684539 logs.go:284] 1 containers: [b0dd2e239ba472bd999437617e44545f1256b50285fac10dbda59ac5e2f56bcc]
	I0109 00:03:40.636742 1684539 ssh_runner.go:195] Run: which crictl
	I0109 00:03:40.641855 1684539 logs.go:123] Gathering logs for kindnet [b0dd2e239ba472bd999437617e44545f1256b50285fac10dbda59ac5e2f56bcc] ...
	I0109 00:03:40.641879 1684539 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b0dd2e239ba472bd999437617e44545f1256b50285fac10dbda59ac5e2f56bcc"
	I0109 00:03:40.645610 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0109 00:03:40.699156 1684539 logs.go:123] Gathering logs for container status ...
	I0109 00:03:40.699186 1684539 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0109 00:03:40.823047 1684539 logs.go:123] Gathering logs for kubelet ...
	I0109 00:03:40.823125 1684539 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0109 00:03:40.890579 1684539 logs.go:138] Found kubelet problem: Jan 09 00:03:08 addons-983119 kubelet[1347]: W0109 00:03:08.426876    1347 reflector.go:535] object-"kube-system"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-983119" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-983119' and this object
	W0109 00:03:40.890855 1684539 logs.go:138] Found kubelet problem: Jan 09 00:03:08 addons-983119 kubelet[1347]: E0109 00:03:08.426965    1347 reflector.go:147] object-"kube-system"/"gcp-auth": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-983119" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-983119' and this object
	W0109 00:03:40.891058 1684539 logs.go:138] Found kubelet problem: Jan 09 00:03:08 addons-983119 kubelet[1347]: W0109 00:03:08.427144    1347 reflector.go:535] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-983119" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-983119' and this object
	W0109 00:03:40.891277 1684539 logs.go:138] Found kubelet problem: Jan 09 00:03:08 addons-983119 kubelet[1347]: E0109 00:03:08.427169    1347 reflector.go:147] object-"gcp-auth"/"gcp-auth-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-983119" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-983119' and this object
	W0109 00:03:40.891488 1684539 logs.go:138] Found kubelet problem: Jan 09 00:03:08 addons-983119 kubelet[1347]: W0109 00:03:08.428403    1347 reflector.go:535] object-"ingress-nginx"/"ingress-nginx-admission": failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-983119" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-983119' and this object
	W0109 00:03:40.891724 1684539 logs.go:138] Found kubelet problem: Jan 09 00:03:08 addons-983119 kubelet[1347]: E0109 00:03:08.428439    1347 reflector.go:147] object-"ingress-nginx"/"ingress-nginx-admission": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-983119" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-983119' and this object
	W0109 00:03:40.891931 1684539 logs.go:138] Found kubelet problem: Jan 09 00:03:08 addons-983119 kubelet[1347]: W0109 00:03:08.428481    1347 reflector.go:535] object-"ingress-nginx"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-983119" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-983119' and this object
	W0109 00:03:40.892159 1684539 logs.go:138] Found kubelet problem: Jan 09 00:03:08 addons-983119 kubelet[1347]: E0109 00:03:08.428493    1347 reflector.go:147] object-"ingress-nginx"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-983119" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-983119' and this object
	W0109 00:03:40.892370 1684539 logs.go:138] Found kubelet problem: Jan 09 00:03:08 addons-983119 kubelet[1347]: W0109 00:03:08.428524    1347 reflector.go:535] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-983119" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-983119' and this object
	W0109 00:03:40.892612 1684539 logs.go:138] Found kubelet problem: Jan 09 00:03:08 addons-983119 kubelet[1347]: E0109 00:03:08.428533    1347 reflector.go:147] object-"local-path-storage"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-983119" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-983119' and this object
	W0109 00:03:40.892823 1684539 logs.go:138] Found kubelet problem: Jan 09 00:03:08 addons-983119 kubelet[1347]: W0109 00:03:08.428565    1347 reflector.go:535] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-983119" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-983119' and this object
	W0109 00:03:40.893054 1684539 logs.go:138] Found kubelet problem: Jan 09 00:03:08 addons-983119 kubelet[1347]: E0109 00:03:08.428574    1347 reflector.go:147] object-"local-path-storage"/"local-path-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-983119" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-983119' and this object
	W0109 00:03:40.893256 1684539 logs.go:138] Found kubelet problem: Jan 09 00:03:08 addons-983119 kubelet[1347]: W0109 00:03:08.428612    1347 reflector.go:535] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-983119" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-983119' and this object
	W0109 00:03:40.893481 1684539 logs.go:138] Found kubelet problem: Jan 09 00:03:08 addons-983119 kubelet[1347]: E0109 00:03:08.428621    1347 reflector.go:147] object-"default"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-983119" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-983119' and this object
	I0109 00:03:40.919465 1684539 logs.go:123] Gathering logs for describe nodes ...
	I0109 00:03:40.919493 1684539 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0109 00:03:40.944433 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0109 00:03:41.065384 1684539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0109 00:03:41.073196 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0109 00:03:41.103097 1684539 logs.go:123] Gathering logs for coredns [b7a4ca1bd2b68b45e2e49096733a1a921f7244222487f8532eedf55bf9fc310a] ...
	I0109 00:03:41.103130 1684539 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b7a4ca1bd2b68b45e2e49096733a1a921f7244222487f8532eedf55bf9fc310a"
	I0109 00:03:41.144840 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0109 00:03:41.153680 1684539 logs.go:123] Gathering logs for kube-scheduler [ca546a071dfbff1b1fc40b95cfb1af00bc76cb73e484276c72fa0b1f9c46aeec] ...
	I0109 00:03:41.153707 1684539 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ca546a071dfbff1b1fc40b95cfb1af00bc76cb73e484276c72fa0b1f9c46aeec"
	I0109 00:03:41.204759 1684539 logs.go:123] Gathering logs for kube-proxy [c83f5ce969abb821559581b8daa7137b2e93fabc70e7eb5fd6bd59e8f3a5d791] ...
	I0109 00:03:41.204790 1684539 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c83f5ce969abb821559581b8daa7137b2e93fabc70e7eb5fd6bd59e8f3a5d791"
	I0109 00:03:41.255518 1684539 logs.go:123] Gathering logs for kube-controller-manager [a57018d4c10a853439070e81b0334c348d0a675fb9830a2b3876094408d092db] ...
	I0109 00:03:41.255545 1684539 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a57018d4c10a853439070e81b0334c348d0a675fb9830a2b3876094408d092db"
	I0109 00:03:41.325275 1684539 logs.go:123] Gathering logs for CRI-O ...
	I0109 00:03:41.325308 1684539 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0109 00:03:41.416849 1684539 logs.go:123] Gathering logs for dmesg ...
	I0109 00:03:41.416884 1684539 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0109 00:03:41.437790 1684539 logs.go:123] Gathering logs for kube-apiserver [62783a6bd185f396cb47eb0c07714d63bd5492ab074f91c3541bdb334eed0f96] ...
	I0109 00:03:41.437819 1684539 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 62783a6bd185f396cb47eb0c07714d63bd5492ab074f91c3541bdb334eed0f96"
	I0109 00:03:41.444464 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0109 00:03:41.506110 1684539 logs.go:123] Gathering logs for etcd [1011042feffcab2580c6186d821b65221c50d7197c9277d66b72a69e20117153] ...
	I0109 00:03:41.506145 1684539 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1011042feffcab2580c6186d821b65221c50d7197c9277d66b72a69e20117153"
	I0109 00:03:41.562156 1684539 out.go:309] Setting ErrFile to fd 2...
	I0109 00:03:41.562184 1684539 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0109 00:03:41.562240 1684539 out.go:239] X Problems detected in kubelet:
	W0109 00:03:41.562254 1684539 out.go:239]   Jan 09 00:03:08 addons-983119 kubelet[1347]: E0109 00:03:08.428533    1347 reflector.go:147] object-"local-path-storage"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-983119" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-983119' and this object
	W0109 00:03:41.562261 1684539 out.go:239]   Jan 09 00:03:08 addons-983119 kubelet[1347]: W0109 00:03:08.428565    1347 reflector.go:535] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-983119" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-983119' and this object
	W0109 00:03:41.562274 1684539 out.go:239]   Jan 09 00:03:08 addons-983119 kubelet[1347]: E0109 00:03:08.428574    1347 reflector.go:147] object-"local-path-storage"/"local-path-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-983119" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-983119' and this object
	W0109 00:03:41.562282 1684539 out.go:239]   Jan 09 00:03:08 addons-983119 kubelet[1347]: W0109 00:03:08.428612    1347 reflector.go:535] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-983119" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-983119' and this object
	W0109 00:03:41.562292 1684539 out.go:239]   Jan 09 00:03:08 addons-983119 kubelet[1347]: E0109 00:03:08.428621    1347 reflector.go:147] object-"default"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-983119" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-983119' and this object
	I0109 00:03:41.562298 1684539 out.go:309] Setting ErrFile to fd 2...
	I0109 00:03:41.562304 1684539 out.go:343] TERM=,COLORTERM=, which probably does not support color
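For reference, the gathering pass above reduces to a handful of host commands run over SSH. A minimal sketch of the same sequence, runnable inside the minikube node (the container ID is a placeholder for an ID returned by crictl ps):

	# list container IDs for one control-plane component, then dump its recent logs
	sudo crictl ps -a --quiet --name=kube-apiserver
	sudo /usr/bin/crictl logs --tail 400 <container-id>
	# unit logs and kernel warnings, exactly as gathered above
	sudo journalctl -u kubelet -n 400
	sudo journalctl -u crio -n 400
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400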
	I0109 00:03:41.568455 1684539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0109 00:03:41.572776 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0109 00:03:41.644317 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0109 00:03:41.944094 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0109 00:03:42.065764 1684539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0109 00:03:42.072682 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0109 00:03:42.145058 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0109 00:03:42.444657 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0109 00:03:42.566944 1684539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0109 00:03:42.573856 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0109 00:03:42.644879 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0109 00:03:42.943389 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0109 00:03:43.067507 1684539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0109 00:03:43.072575 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0109 00:03:43.145936 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0109 00:03:43.443993 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0109 00:03:43.567097 1684539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0109 00:03:43.597604 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0109 00:03:43.645376 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0109 00:03:43.943858 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0109 00:03:44.069203 1684539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0109 00:03:44.074031 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0109 00:03:44.145535 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0109 00:03:44.450117 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0109 00:03:44.566639 1684539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0109 00:03:44.572939 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0109 00:03:44.644936 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0109 00:03:44.943878 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0109 00:03:45.067911 1684539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0109 00:03:45.075284 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0109 00:03:45.145495 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0109 00:03:45.443051 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0109 00:03:45.565978 1684539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0109 00:03:45.572379 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0109 00:03:45.644294 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0109 00:03:45.948942 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0109 00:03:46.065479 1684539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0109 00:03:46.072640 1684539 kapi.go:107] duration metric: took 1m4.007024026s to wait for kubernetes.io/minikube-addons=registry ...
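The registry wait that just completed is label-based polling against the apiserver. A hedged out-of-cluster equivalent, assuming the registry pods carry the same kubernetes.io/minikube-addons=registry label and run in kube-system (the namespace is not stated in this log):

	kubectl --context addons-983119 wait --for=condition=ready pod \
	  --selector=kubernetes.io/minikube-addons=registry -n kube-system --timeout=120s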
	I0109 00:03:46.144533 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0109 00:03:46.444246 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0109 00:03:46.571883 1684539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0109 00:03:46.645121 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0109 00:03:46.943524 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0109 00:03:47.088830 1684539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0109 00:03:47.145256 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0109 00:03:47.446860 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0109 00:03:47.566232 1684539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0109 00:03:47.644898 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0109 00:03:47.942831 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0109 00:03:48.066711 1684539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0109 00:03:48.145325 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0109 00:03:48.443141 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0109 00:03:48.566617 1684539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0109 00:03:48.650642 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0109 00:03:48.942984 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0109 00:03:49.067840 1684539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0109 00:03:49.145528 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0109 00:03:49.447678 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0109 00:03:49.567184 1684539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0109 00:03:49.647969 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0109 00:03:49.951233 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0109 00:03:50.067473 1684539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0109 00:03:50.146056 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0109 00:03:50.523733 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0109 00:03:50.568095 1684539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0109 00:03:50.647711 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0109 00:03:50.944182 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0109 00:03:51.068300 1684539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0109 00:03:51.145348 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0109 00:03:51.444949 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0109 00:03:51.563525 1684539 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0109 00:03:51.573030 1684539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0109 00:03:51.594856 1684539 api_server.go:72] duration metric: took 1m14.55115726s to wait for apiserver process to appear ...
	I0109 00:03:51.594882 1684539 api_server.go:88] waiting for apiserver healthz status ...
	I0109 00:03:51.594943 1684539 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0109 00:03:51.595020 1684539 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0109 00:03:51.647569 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0109 00:03:51.679359 1684539 cri.go:89] found id: "62783a6bd185f396cb47eb0c07714d63bd5492ab074f91c3541bdb334eed0f96"
	I0109 00:03:51.679383 1684539 cri.go:89] found id: ""
	I0109 00:03:51.679391 1684539 logs.go:284] 1 containers: [62783a6bd185f396cb47eb0c07714d63bd5492ab074f91c3541bdb334eed0f96]
	I0109 00:03:51.679470 1684539 ssh_runner.go:195] Run: which crictl
	I0109 00:03:51.684948 1684539 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0109 00:03:51.685067 1684539 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0109 00:03:51.744174 1684539 cri.go:89] found id: "1011042feffcab2580c6186d821b65221c50d7197c9277d66b72a69e20117153"
	I0109 00:03:51.744199 1684539 cri.go:89] found id: ""
	I0109 00:03:51.744208 1684539 logs.go:284] 1 containers: [1011042feffcab2580c6186d821b65221c50d7197c9277d66b72a69e20117153]
	I0109 00:03:51.744295 1684539 ssh_runner.go:195] Run: which crictl
	I0109 00:03:51.756287 1684539 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0109 00:03:51.756389 1684539 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0109 00:03:51.834396 1684539 cri.go:89] found id: "b7a4ca1bd2b68b45e2e49096733a1a921f7244222487f8532eedf55bf9fc310a"
	I0109 00:03:51.834418 1684539 cri.go:89] found id: ""
	I0109 00:03:51.834501 1684539 logs.go:284] 1 containers: [b7a4ca1bd2b68b45e2e49096733a1a921f7244222487f8532eedf55bf9fc310a]
	I0109 00:03:51.834589 1684539 ssh_runner.go:195] Run: which crictl
	I0109 00:03:51.840041 1684539 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0109 00:03:51.840107 1684539 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0109 00:03:51.903977 1684539 cri.go:89] found id: "ca546a071dfbff1b1fc40b95cfb1af00bc76cb73e484276c72fa0b1f9c46aeec"
	I0109 00:03:51.904049 1684539 cri.go:89] found id: ""
	I0109 00:03:51.904077 1684539 logs.go:284] 1 containers: [ca546a071dfbff1b1fc40b95cfb1af00bc76cb73e484276c72fa0b1f9c46aeec]
	I0109 00:03:51.904157 1684539 ssh_runner.go:195] Run: which crictl
	I0109 00:03:51.909479 1684539 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0109 00:03:51.909574 1684539 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0109 00:03:51.943448 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0109 00:03:51.962066 1684539 cri.go:89] found id: "c83f5ce969abb821559581b8daa7137b2e93fabc70e7eb5fd6bd59e8f3a5d791"
	I0109 00:03:51.962136 1684539 cri.go:89] found id: ""
	I0109 00:03:51.962161 1684539 logs.go:284] 1 containers: [c83f5ce969abb821559581b8daa7137b2e93fabc70e7eb5fd6bd59e8f3a5d791]
	I0109 00:03:51.962256 1684539 ssh_runner.go:195] Run: which crictl
	I0109 00:03:51.967238 1684539 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0109 00:03:51.967354 1684539 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0109 00:03:52.054134 1684539 cri.go:89] found id: "a57018d4c10a853439070e81b0334c348d0a675fb9830a2b3876094408d092db"
	I0109 00:03:52.054201 1684539 cri.go:89] found id: ""
	I0109 00:03:52.054222 1684539 logs.go:284] 1 containers: [a57018d4c10a853439070e81b0334c348d0a675fb9830a2b3876094408d092db]
	I0109 00:03:52.054322 1684539 ssh_runner.go:195] Run: which crictl
	I0109 00:03:52.061478 1684539 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0109 00:03:52.061597 1684539 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0109 00:03:52.069412 1684539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0109 00:03:52.145352 1684539 cri.go:89] found id: "b0dd2e239ba472bd999437617e44545f1256b50285fac10dbda59ac5e2f56bcc"
	I0109 00:03:52.145421 1684539 cri.go:89] found id: ""
	I0109 00:03:52.145455 1684539 logs.go:284] 1 containers: [b0dd2e239ba472bd999437617e44545f1256b50285fac10dbda59ac5e2f56bcc]
	I0109 00:03:52.145539 1684539 ssh_runner.go:195] Run: which crictl
	I0109 00:03:52.147759 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0109 00:03:52.150506 1684539 logs.go:123] Gathering logs for coredns [b7a4ca1bd2b68b45e2e49096733a1a921f7244222487f8532eedf55bf9fc310a] ...
	I0109 00:03:52.150575 1684539 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b7a4ca1bd2b68b45e2e49096733a1a921f7244222487f8532eedf55bf9fc310a"
	I0109 00:03:52.201719 1684539 logs.go:123] Gathering logs for kube-scheduler [ca546a071dfbff1b1fc40b95cfb1af00bc76cb73e484276c72fa0b1f9c46aeec] ...
	I0109 00:03:52.201793 1684539 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ca546a071dfbff1b1fc40b95cfb1af00bc76cb73e484276c72fa0b1f9c46aeec"
	I0109 00:03:52.249951 1684539 logs.go:123] Gathering logs for kube-proxy [c83f5ce969abb821559581b8daa7137b2e93fabc70e7eb5fd6bd59e8f3a5d791] ...
	I0109 00:03:52.249985 1684539 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c83f5ce969abb821559581b8daa7137b2e93fabc70e7eb5fd6bd59e8f3a5d791"
	I0109 00:03:52.295122 1684539 logs.go:123] Gathering logs for kube-controller-manager [a57018d4c10a853439070e81b0334c348d0a675fb9830a2b3876094408d092db] ...
	I0109 00:03:52.295149 1684539 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a57018d4c10a853439070e81b0334c348d0a675fb9830a2b3876094408d092db"
	I0109 00:03:52.380754 1684539 logs.go:123] Gathering logs for kubelet ...
	I0109 00:03:52.380794 1684539 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0109 00:03:52.443579 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0109 00:03:52.456781 1684539 logs.go:138] Found kubelet problem: Jan 09 00:03:08 addons-983119 kubelet[1347]: W0109 00:03:08.426876    1347 reflector.go:535] object-"kube-system"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-983119" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-983119' and this object
	W0109 00:03:52.457007 1684539 logs.go:138] Found kubelet problem: Jan 09 00:03:08 addons-983119 kubelet[1347]: E0109 00:03:08.426965    1347 reflector.go:147] object-"kube-system"/"gcp-auth": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-983119" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-983119' and this object
	W0109 00:03:52.457186 1684539 logs.go:138] Found kubelet problem: Jan 09 00:03:08 addons-983119 kubelet[1347]: W0109 00:03:08.427144    1347 reflector.go:535] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-983119" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-983119' and this object
	W0109 00:03:52.457379 1684539 logs.go:138] Found kubelet problem: Jan 09 00:03:08 addons-983119 kubelet[1347]: E0109 00:03:08.427169    1347 reflector.go:147] object-"gcp-auth"/"gcp-auth-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-983119" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-983119' and this object
	W0109 00:03:52.457565 1684539 logs.go:138] Found kubelet problem: Jan 09 00:03:08 addons-983119 kubelet[1347]: W0109 00:03:08.428403    1347 reflector.go:535] object-"ingress-nginx"/"ingress-nginx-admission": failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-983119" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-983119' and this object
	W0109 00:03:52.457773 1684539 logs.go:138] Found kubelet problem: Jan 09 00:03:08 addons-983119 kubelet[1347]: E0109 00:03:08.428439    1347 reflector.go:147] object-"ingress-nginx"/"ingress-nginx-admission": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-983119" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-983119' and this object
	W0109 00:03:52.457960 1684539 logs.go:138] Found kubelet problem: Jan 09 00:03:08 addons-983119 kubelet[1347]: W0109 00:03:08.428481    1347 reflector.go:535] object-"ingress-nginx"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-983119" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-983119' and this object
	W0109 00:03:52.458163 1684539 logs.go:138] Found kubelet problem: Jan 09 00:03:08 addons-983119 kubelet[1347]: E0109 00:03:08.428493    1347 reflector.go:147] object-"ingress-nginx"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-983119" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-983119' and this object
	W0109 00:03:52.458347 1684539 logs.go:138] Found kubelet problem: Jan 09 00:03:08 addons-983119 kubelet[1347]: W0109 00:03:08.428524    1347 reflector.go:535] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-983119" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-983119' and this object
	W0109 00:03:52.458588 1684539 logs.go:138] Found kubelet problem: Jan 09 00:03:08 addons-983119 kubelet[1347]: E0109 00:03:08.428533    1347 reflector.go:147] object-"local-path-storage"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-983119" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-983119' and this object
	W0109 00:03:52.458784 1684539 logs.go:138] Found kubelet problem: Jan 09 00:03:08 addons-983119 kubelet[1347]: W0109 00:03:08.428565    1347 reflector.go:535] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-983119" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-983119' and this object
	W0109 00:03:52.458994 1684539 logs.go:138] Found kubelet problem: Jan 09 00:03:08 addons-983119 kubelet[1347]: E0109 00:03:08.428574    1347 reflector.go:147] object-"local-path-storage"/"local-path-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-983119" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-983119' and this object
	W0109 00:03:52.459172 1684539 logs.go:138] Found kubelet problem: Jan 09 00:03:08 addons-983119 kubelet[1347]: W0109 00:03:08.428612    1347 reflector.go:535] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-983119" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-983119' and this object
	W0109 00:03:52.459370 1684539 logs.go:138] Found kubelet problem: Jan 09 00:03:08 addons-983119 kubelet[1347]: E0109 00:03:08.428621    1347 reflector.go:147] object-"default"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-983119" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-983119' and this object
	I0109 00:03:52.490618 1684539 logs.go:123] Gathering logs for describe nodes ...
	I0109 00:03:52.490646 1684539 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0109 00:03:52.566716 1684539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0109 00:03:52.645033 1684539 logs.go:123] Gathering logs for kube-apiserver [62783a6bd185f396cb47eb0c07714d63bd5492ab074f91c3541bdb334eed0f96] ...
	I0109 00:03:52.645064 1684539 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 62783a6bd185f396cb47eb0c07714d63bd5492ab074f91c3541bdb334eed0f96"
	I0109 00:03:52.648383 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0109 00:03:52.712940 1684539 logs.go:123] Gathering logs for CRI-O ...
	I0109 00:03:52.712977 1684539 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0109 00:03:52.809403 1684539 logs.go:123] Gathering logs for container status ...
	I0109 00:03:52.809440 1684539 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0109 00:03:52.863322 1684539 logs.go:123] Gathering logs for dmesg ...
	I0109 00:03:52.863353 1684539 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0109 00:03:52.884271 1684539 logs.go:123] Gathering logs for etcd [1011042feffcab2580c6186d821b65221c50d7197c9277d66b72a69e20117153] ...
	I0109 00:03:52.884347 1684539 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1011042feffcab2580c6186d821b65221c50d7197c9277d66b72a69e20117153"
	I0109 00:03:52.958562 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0109 00:03:52.980695 1684539 logs.go:123] Gathering logs for kindnet [b0dd2e239ba472bd999437617e44545f1256b50285fac10dbda59ac5e2f56bcc] ...
	I0109 00:03:52.980730 1684539 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b0dd2e239ba472bd999437617e44545f1256b50285fac10dbda59ac5e2f56bcc"
	I0109 00:03:53.046043 1684539 out.go:309] Setting ErrFile to fd 2...
	I0109 00:03:53.046071 1684539 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0109 00:03:53.046140 1684539 out.go:239] X Problems detected in kubelet:
	W0109 00:03:53.046155 1684539 out.go:239]   Jan 09 00:03:08 addons-983119 kubelet[1347]: E0109 00:03:08.428533    1347 reflector.go:147] object-"local-path-storage"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-983119" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-983119' and this object
	W0109 00:03:53.046163 1684539 out.go:239]   Jan 09 00:03:08 addons-983119 kubelet[1347]: W0109 00:03:08.428565    1347 reflector.go:535] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-983119" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-983119' and this object
	W0109 00:03:53.046202 1684539 out.go:239]   Jan 09 00:03:08 addons-983119 kubelet[1347]: E0109 00:03:08.428574    1347 reflector.go:147] object-"local-path-storage"/"local-path-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-983119" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-983119' and this object
	W0109 00:03:53.046211 1684539 out.go:239]   Jan 09 00:03:08 addons-983119 kubelet[1347]: W0109 00:03:08.428612    1347 reflector.go:535] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-983119" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-983119' and this object
	W0109 00:03:53.046222 1684539 out.go:239]   Jan 09 00:03:08 addons-983119 kubelet[1347]: E0109 00:03:08.428621    1347 reflector.go:147] object-"default"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-983119" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-983119' and this object
	I0109 00:03:53.046228 1684539 out.go:309] Setting ErrFile to fd 2...
	I0109 00:03:53.046241 1684539 out.go:343] TERM=,COLORTERM=, which probably does not support color
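Every flagged kubelet problem above is the same failure mode: a reflector list/watch on a secret or configmap denied with "no relationship found between node 'addons-983119' and this object", i.e. the node authorizer rejecting the read because no pod bound to the node referenced the object yet; these typically clear once the affected pods schedule. To pull just these lines from the node, building on the journalctl command already used above (the grep filter is an addition, not from the log):

	sudo journalctl -u kubelet -n 400 | grep -E 'reflector.go:(147|535)'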
	I0109 00:03:53.066680 1684539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0109 00:03:53.144730 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0109 00:03:53.443078 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0109 00:03:53.571800 1684539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0109 00:03:53.646758 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0109 00:03:53.944628 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0109 00:03:54.067804 1684539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0109 00:03:54.145619 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0109 00:03:54.450387 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0109 00:03:54.566128 1684539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0109 00:03:54.644593 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0109 00:03:54.942858 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0109 00:03:55.066289 1684539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0109 00:03:55.145378 1684539 kapi.go:107] duration metric: took 1m9.004613194s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0109 00:03:55.148335 1684539 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-983119 cluster.
	I0109 00:03:55.150726 1684539 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0109 00:03:55.153152 1684539 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
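Per the hint above, opting one pod out of the credential mount is a pod label. A hedged example using kubectl; the key comes from the message above, while the value "true" is an assumption:

	# hypothetical pod name; label key taken from the gcp-auth hint above
	kubectl --context addons-983119 label pod <pod-name> gcp-auth-skip-secret=true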
	I0109 00:03:55.443800 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0109 00:03:55.566709 1684539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0109 00:03:55.943331 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0109 00:03:56.065846 1684539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0109 00:03:56.442863 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0109 00:03:56.566621 1684539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0109 00:03:56.958763 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0109 00:03:57.071267 1684539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0109 00:03:57.451341 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0109 00:03:57.566464 1684539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0109 00:03:57.967853 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0109 00:03:58.067033 1684539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0109 00:03:58.443217 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0109 00:03:58.566480 1684539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0109 00:03:58.943369 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0109 00:03:59.068771 1684539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0109 00:03:59.444960 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0109 00:03:59.568603 1684539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0109 00:03:59.943191 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0109 00:04:00.071193 1684539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0109 00:04:00.442841 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0109 00:04:00.566175 1684539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0109 00:04:00.946530 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0109 00:04:01.072584 1684539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0109 00:04:01.443424 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0109 00:04:01.571758 1684539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0109 00:04:01.943142 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0109 00:04:02.074542 1684539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0109 00:04:02.444955 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0109 00:04:02.566197 1684539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0109 00:04:02.944415 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0109 00:04:03.047283 1684539 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0109 00:04:03.060130 1684539 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0109 00:04:03.063019 1684539 api_server.go:141] control plane version: v1.28.4
	I0109 00:04:03.063096 1684539 api_server.go:131] duration metric: took 11.468206192s to wait for apiserver health ...
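The healthz probe above can be reproduced from inside the node with the bundled kubectl and the cluster kubeconfig (both paths appear in the describe-nodes command earlier in this log); on a healthy control plane it prints the same "ok" body returned above:

	sudo /var/lib/minikube/binaries/v1.28.4/kubectl \
	  --kubeconfig=/var/lib/minikube/kubeconfig get --raw /healthz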
	I0109 00:04:03.063121 1684539 system_pods.go:43] waiting for kube-system pods to appear ...
	I0109 00:04:03.063166 1684539 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0109 00:04:03.063258 1684539 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0109 00:04:03.066033 1684539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0109 00:04:03.130017 1684539 cri.go:89] found id: "62783a6bd185f396cb47eb0c07714d63bd5492ab074f91c3541bdb334eed0f96"
	I0109 00:04:03.130039 1684539 cri.go:89] found id: ""
	I0109 00:04:03.130047 1684539 logs.go:284] 1 containers: [62783a6bd185f396cb47eb0c07714d63bd5492ab074f91c3541bdb334eed0f96]
	I0109 00:04:03.130105 1684539 ssh_runner.go:195] Run: which crictl
	I0109 00:04:03.136931 1684539 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0109 00:04:03.136996 1684539 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0109 00:04:03.195730 1684539 cri.go:89] found id: "1011042feffcab2580c6186d821b65221c50d7197c9277d66b72a69e20117153"
	I0109 00:04:03.195757 1684539 cri.go:89] found id: ""
	I0109 00:04:03.195765 1684539 logs.go:284] 1 containers: [1011042feffcab2580c6186d821b65221c50d7197c9277d66b72a69e20117153]
	I0109 00:04:03.195821 1684539 ssh_runner.go:195] Run: which crictl
	I0109 00:04:03.200898 1684539 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0109 00:04:03.201004 1684539 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0109 00:04:03.258411 1684539 cri.go:89] found id: "b7a4ca1bd2b68b45e2e49096733a1a921f7244222487f8532eedf55bf9fc310a"
	I0109 00:04:03.258472 1684539 cri.go:89] found id: ""
	I0109 00:04:03.258481 1684539 logs.go:284] 1 containers: [b7a4ca1bd2b68b45e2e49096733a1a921f7244222487f8532eedf55bf9fc310a]
	I0109 00:04:03.258573 1684539 ssh_runner.go:195] Run: which crictl
	I0109 00:04:03.264374 1684539 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0109 00:04:03.264489 1684539 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0109 00:04:03.316019 1684539 cri.go:89] found id: "ca546a071dfbff1b1fc40b95cfb1af00bc76cb73e484276c72fa0b1f9c46aeec"
	I0109 00:04:03.316042 1684539 cri.go:89] found id: ""
	I0109 00:04:03.316050 1684539 logs.go:284] 1 containers: [ca546a071dfbff1b1fc40b95cfb1af00bc76cb73e484276c72fa0b1f9c46aeec]
	I0109 00:04:03.316137 1684539 ssh_runner.go:195] Run: which crictl
	I0109 00:04:03.321204 1684539 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0109 00:04:03.321298 1684539 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0109 00:04:03.384862 1684539 cri.go:89] found id: "c83f5ce969abb821559581b8daa7137b2e93fabc70e7eb5fd6bd59e8f3a5d791"
	I0109 00:04:03.384887 1684539 cri.go:89] found id: ""
	I0109 00:04:03.384896 1684539 logs.go:284] 1 containers: [c83f5ce969abb821559581b8daa7137b2e93fabc70e7eb5fd6bd59e8f3a5d791]
	I0109 00:04:03.384986 1684539 ssh_runner.go:195] Run: which crictl
	I0109 00:04:03.390857 1684539 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0109 00:04:03.390963 1684539 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0109 00:04:03.454876 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0109 00:04:03.457771 1684539 cri.go:89] found id: "a57018d4c10a853439070e81b0334c348d0a675fb9830a2b3876094408d092db"
	I0109 00:04:03.457794 1684539 cri.go:89] found id: ""
	I0109 00:04:03.457802 1684539 logs.go:284] 1 containers: [a57018d4c10a853439070e81b0334c348d0a675fb9830a2b3876094408d092db]
	I0109 00:04:03.457871 1684539 ssh_runner.go:195] Run: which crictl
	I0109 00:04:03.467147 1684539 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0109 00:04:03.467239 1684539 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0109 00:04:03.524974 1684539 cri.go:89] found id: "b0dd2e239ba472bd999437617e44545f1256b50285fac10dbda59ac5e2f56bcc"
	I0109 00:04:03.525000 1684539 cri.go:89] found id: ""
	I0109 00:04:03.525008 1684539 logs.go:284] 1 containers: [b0dd2e239ba472bd999437617e44545f1256b50285fac10dbda59ac5e2f56bcc]
	I0109 00:04:03.525088 1684539 ssh_runner.go:195] Run: which crictl
	I0109 00:04:03.530549 1684539 logs.go:123] Gathering logs for dmesg ...
	I0109 00:04:03.530579 1684539 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0109 00:04:03.565809 1684539 logs.go:123] Gathering logs for describe nodes ...
	I0109 00:04:03.565840 1684539 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0109 00:04:03.571990 1684539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0109 00:04:03.779094 1684539 logs.go:123] Gathering logs for kube-scheduler [ca546a071dfbff1b1fc40b95cfb1af00bc76cb73e484276c72fa0b1f9c46aeec] ...
	I0109 00:04:03.779127 1684539 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ca546a071dfbff1b1fc40b95cfb1af00bc76cb73e484276c72fa0b1f9c46aeec"
	I0109 00:04:03.856981 1684539 logs.go:123] Gathering logs for kube-controller-manager [a57018d4c10a853439070e81b0334c348d0a675fb9830a2b3876094408d092db] ...
	I0109 00:04:03.857014 1684539 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a57018d4c10a853439070e81b0334c348d0a675fb9830a2b3876094408d092db"
	I0109 00:04:03.943595 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0109 00:04:03.998246 1684539 logs.go:123] Gathering logs for kindnet [b0dd2e239ba472bd999437617e44545f1256b50285fac10dbda59ac5e2f56bcc] ...
	I0109 00:04:03.998284 1684539 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b0dd2e239ba472bd999437617e44545f1256b50285fac10dbda59ac5e2f56bcc"
	I0109 00:04:04.065645 1684539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0109 00:04:04.086358 1684539 logs.go:123] Gathering logs for CRI-O ...
	I0109 00:04:04.086386 1684539 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0109 00:04:04.197157 1684539 logs.go:123] Gathering logs for container status ...
	I0109 00:04:04.197195 1684539 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0109 00:04:04.328750 1684539 logs.go:123] Gathering logs for kubelet ...
	I0109 00:04:04.328780 1684539 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0109 00:04:04.386358 1684539 logs.go:138] Found kubelet problem: Jan 09 00:03:08 addons-983119 kubelet[1347]: W0109 00:03:08.426876    1347 reflector.go:535] object-"kube-system"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-983119" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-983119' and this object
	W0109 00:04:04.386664 1684539 logs.go:138] Found kubelet problem: Jan 09 00:03:08 addons-983119 kubelet[1347]: E0109 00:03:08.426965    1347 reflector.go:147] object-"kube-system"/"gcp-auth": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-983119" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-983119' and this object
	W0109 00:04:04.386862 1684539 logs.go:138] Found kubelet problem: Jan 09 00:03:08 addons-983119 kubelet[1347]: W0109 00:03:08.427144    1347 reflector.go:535] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-983119" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-983119' and this object
	W0109 00:04:04.387076 1684539 logs.go:138] Found kubelet problem: Jan 09 00:03:08 addons-983119 kubelet[1347]: E0109 00:03:08.427169    1347 reflector.go:147] object-"gcp-auth"/"gcp-auth-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-983119" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-983119' and this object
	W0109 00:04:04.387283 1684539 logs.go:138] Found kubelet problem: Jan 09 00:03:08 addons-983119 kubelet[1347]: W0109 00:03:08.428403    1347 reflector.go:535] object-"ingress-nginx"/"ingress-nginx-admission": failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-983119" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-983119' and this object
	W0109 00:04:04.387506 1684539 logs.go:138] Found kubelet problem: Jan 09 00:03:08 addons-983119 kubelet[1347]: E0109 00:03:08.428439    1347 reflector.go:147] object-"ingress-nginx"/"ingress-nginx-admission": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-983119" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-983119' and this object
	W0109 00:04:04.387704 1684539 logs.go:138] Found kubelet problem: Jan 09 00:03:08 addons-983119 kubelet[1347]: W0109 00:03:08.428481    1347 reflector.go:535] object-"ingress-nginx"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-983119" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-983119' and this object
	W0109 00:04:04.387921 1684539 logs.go:138] Found kubelet problem: Jan 09 00:03:08 addons-983119 kubelet[1347]: E0109 00:03:08.428493    1347 reflector.go:147] object-"ingress-nginx"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-983119" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-983119' and this object
	W0109 00:04:04.388129 1684539 logs.go:138] Found kubelet problem: Jan 09 00:03:08 addons-983119 kubelet[1347]: W0109 00:03:08.428524    1347 reflector.go:535] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-983119" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-983119' and this object
	W0109 00:04:04.388358 1684539 logs.go:138] Found kubelet problem: Jan 09 00:03:08 addons-983119 kubelet[1347]: E0109 00:03:08.428533    1347 reflector.go:147] object-"local-path-storage"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-983119" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-983119' and this object
	W0109 00:04:04.388562 1684539 logs.go:138] Found kubelet problem: Jan 09 00:03:08 addons-983119 kubelet[1347]: W0109 00:03:08.428565    1347 reflector.go:535] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-983119" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-983119' and this object
	W0109 00:04:04.388783 1684539 logs.go:138] Found kubelet problem: Jan 09 00:03:08 addons-983119 kubelet[1347]: E0109 00:03:08.428574    1347 reflector.go:147] object-"local-path-storage"/"local-path-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-983119" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-983119' and this object
	W0109 00:04:04.388978 1684539 logs.go:138] Found kubelet problem: Jan 09 00:03:08 addons-983119 kubelet[1347]: W0109 00:03:08.428612    1347 reflector.go:535] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-983119" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-983119' and this object
	W0109 00:04:04.389195 1684539 logs.go:138] Found kubelet problem: Jan 09 00:03:08 addons-983119 kubelet[1347]: E0109 00:03:08.428621    1347 reflector.go:147] object-"default"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-983119" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-983119' and this object
	I0109 00:04:04.433200 1684539 logs.go:123] Gathering logs for kube-apiserver [62783a6bd185f396cb47eb0c07714d63bd5492ab074f91c3541bdb334eed0f96] ...
	I0109 00:04:04.433278 1684539 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 62783a6bd185f396cb47eb0c07714d63bd5492ab074f91c3541bdb334eed0f96"
	I0109 00:04:04.452318 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0109 00:04:04.539542 1684539 logs.go:123] Gathering logs for etcd [1011042feffcab2580c6186d821b65221c50d7197c9277d66b72a69e20117153] ...
	I0109 00:04:04.539613 1684539 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1011042feffcab2580c6186d821b65221c50d7197c9277d66b72a69e20117153"
	I0109 00:04:04.567131 1684539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0109 00:04:04.640431 1684539 logs.go:123] Gathering logs for coredns [b7a4ca1bd2b68b45e2e49096733a1a921f7244222487f8532eedf55bf9fc310a] ...
	I0109 00:04:04.640501 1684539 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b7a4ca1bd2b68b45e2e49096733a1a921f7244222487f8532eedf55bf9fc310a"
	I0109 00:04:04.694929 1684539 logs.go:123] Gathering logs for kube-proxy [c83f5ce969abb821559581b8daa7137b2e93fabc70e7eb5fd6bd59e8f3a5d791] ...
	I0109 00:04:04.694997 1684539 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c83f5ce969abb821559581b8daa7137b2e93fabc70e7eb5fd6bd59e8f3a5d791"
	I0109 00:04:04.740170 1684539 out.go:309] Setting ErrFile to fd 2...
	I0109 00:04:04.740194 1684539 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0109 00:04:04.740265 1684539 out.go:239] X Problems detected in kubelet:
	W0109 00:04:04.740277 1684539 out.go:239]   Jan 09 00:03:08 addons-983119 kubelet[1347]: E0109 00:03:08.428533    1347 reflector.go:147] object-"local-path-storage"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-983119" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-983119' and this object
	W0109 00:04:04.740284 1684539 out.go:239]   Jan 09 00:03:08 addons-983119 kubelet[1347]: W0109 00:03:08.428565    1347 reflector.go:535] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-983119" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-983119' and this object
	W0109 00:04:04.740325 1684539 out.go:239]   Jan 09 00:03:08 addons-983119 kubelet[1347]: E0109 00:03:08.428574    1347 reflector.go:147] object-"local-path-storage"/"local-path-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-983119" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-983119' and this object
	W0109 00:04:04.740338 1684539 out.go:239]   Jan 09 00:03:08 addons-983119 kubelet[1347]: W0109 00:03:08.428612    1347 reflector.go:535] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-983119" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-983119' and this object
	W0109 00:04:04.740346 1684539 out.go:239]   Jan 09 00:03:08 addons-983119 kubelet[1347]: E0109 00:03:08.428621    1347 reflector.go:147] object-"default"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-983119" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-983119' and this object
	I0109 00:04:04.740354 1684539 out.go:309] Setting ErrFile to fd 2...
	I0109 00:04:04.740361 1684539 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0109 00:04:04.945038 1684539 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0109 00:04:05.065666 1684539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0109 00:04:05.443663 1684539 kapi.go:107] duration metric: took 1m23.006504261s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0109 00:04:05.565436 1684539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0109 00:04:06.066367 1684539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0109 00:04:06.566101 1684539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0109 00:04:07.065941 1684539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0109 00:04:07.565449 1684539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0109 00:04:08.066358 1684539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0109 00:04:08.566579 1684539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0109 00:04:09.066548 1684539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0109 00:04:09.565880 1684539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0109 00:04:10.065707 1684539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0109 00:04:10.565702 1684539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0109 00:04:11.067352 1684539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0109 00:04:11.565828 1684539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0109 00:04:12.069103 1684539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0109 00:04:12.565405 1684539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0109 00:04:13.065816 1684539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0109 00:04:13.568176 1684539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0109 00:04:14.066043 1684539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0109 00:04:14.565605 1684539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0109 00:04:14.754850 1684539 system_pods.go:59] 18 kube-system pods found
	I0109 00:04:14.754885 1684539 system_pods.go:61] "coredns-5dd5756b68-vzg2p" [02b5f7ac-6843-4735-b5b6-83ed4c6a112c] Running
	I0109 00:04:14.754892 1684539 system_pods.go:61] "csi-hostpath-attacher-0" [f45999a3-2beb-4473-a73a-1318744b0f34] Running
	I0109 00:04:14.754898 1684539 system_pods.go:61] "csi-hostpath-resizer-0" [b08e6cbe-dfe7-4e8f-bb80-52e4101cbd25] Running
	I0109 00:04:14.754919 1684539 system_pods.go:61] "csi-hostpathplugin-mpd4h" [a9a5d1e2-f5be-4613-a127-85371e1948e5] Running
	I0109 00:04:14.754931 1684539 system_pods.go:61] "etcd-addons-983119" [cf20e9a6-cfbb-484a-a00d-9d33240c5127] Running
	I0109 00:04:14.754937 1684539 system_pods.go:61] "kindnet-t4gmv" [d67a419c-8cdb-4190-8076-105865945372] Running
	I0109 00:04:14.754949 1684539 system_pods.go:61] "kube-apiserver-addons-983119" [75df9214-44f4-47e2-b770-422a4687e05e] Running
	I0109 00:04:14.754955 1684539 system_pods.go:61] "kube-controller-manager-addons-983119" [b9864132-2da6-40b4-9b8d-c315202e80cd] Running
	I0109 00:04:14.754968 1684539 system_pods.go:61] "kube-ingress-dns-minikube" [220059cf-253b-4ce8-8a22-9950d48be27e] Running / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0109 00:04:14.754974 1684539 system_pods.go:61] "kube-proxy-4864k" [d8eba8dc-df5a-4622-9a2c-c85a25decbbe] Running
	I0109 00:04:14.754997 1684539 system_pods.go:61] "kube-scheduler-addons-983119" [0ae6afbf-048b-4a3b-b030-51e9d370c4e8] Running
	I0109 00:04:14.755011 1684539 system_pods.go:61] "metrics-server-7c66d45ddc-wvvsq" [3b0056d9-627e-46f2-a86a-e7f4cc7ca3da] Running
	I0109 00:04:14.755018 1684539 system_pods.go:61] "nvidia-device-plugin-daemonset-2qj49" [5d4c1201-ce21-4462-b0d3-1bf7598039b3] Running
	I0109 00:04:14.755029 1684539 system_pods.go:61] "registry-lwbmr" [10d6756f-1d99-487a-9be4-279128cdb09c] Running
	I0109 00:04:14.755035 1684539 system_pods.go:61] "registry-proxy-dbt9w" [a1758f6f-e461-403a-82f6-be54e122eb97] Running
	I0109 00:04:14.755040 1684539 system_pods.go:61] "snapshot-controller-58dbcc7b99-s8qf4" [e41be95b-cf22-4ebe-9d04-0d80b839aa67] Running
	I0109 00:04:14.755049 1684539 system_pods.go:61] "snapshot-controller-58dbcc7b99-w652z" [3c24102b-c3eb-4def-88fa-aa9e69ea5428] Running
	I0109 00:04:14.755056 1684539 system_pods.go:61] "storage-provisioner" [33d32fb2-fb21-4069-909c-51de5390110f] Running
	I0109 00:04:14.755076 1684539 system_pods.go:74] duration metric: took 11.691923602s to wait for pod list to return data ...
	I0109 00:04:14.755090 1684539 default_sa.go:34] waiting for default service account to be created ...
	I0109 00:04:14.758616 1684539 default_sa.go:45] found service account: "default"
	I0109 00:04:14.758642 1684539 default_sa.go:55] duration metric: took 3.544622ms for default service account to be created ...
	I0109 00:04:14.758653 1684539 system_pods.go:116] waiting for k8s-apps to be running ...
	I0109 00:04:14.770735 1684539 system_pods.go:86] 18 kube-system pods found
	I0109 00:04:14.770805 1684539 system_pods.go:89] "coredns-5dd5756b68-vzg2p" [02b5f7ac-6843-4735-b5b6-83ed4c6a112c] Running
	I0109 00:04:14.770827 1684539 system_pods.go:89] "csi-hostpath-attacher-0" [f45999a3-2beb-4473-a73a-1318744b0f34] Running
	I0109 00:04:14.770849 1684539 system_pods.go:89] "csi-hostpath-resizer-0" [b08e6cbe-dfe7-4e8f-bb80-52e4101cbd25] Running
	I0109 00:04:14.770871 1684539 system_pods.go:89] "csi-hostpathplugin-mpd4h" [a9a5d1e2-f5be-4613-a127-85371e1948e5] Running
	I0109 00:04:14.770893 1684539 system_pods.go:89] "etcd-addons-983119" [cf20e9a6-cfbb-484a-a00d-9d33240c5127] Running
	I0109 00:04:14.770913 1684539 system_pods.go:89] "kindnet-t4gmv" [d67a419c-8cdb-4190-8076-105865945372] Running
	I0109 00:04:14.770934 1684539 system_pods.go:89] "kube-apiserver-addons-983119" [75df9214-44f4-47e2-b770-422a4687e05e] Running
	I0109 00:04:14.770967 1684539 system_pods.go:89] "kube-controller-manager-addons-983119" [b9864132-2da6-40b4-9b8d-c315202e80cd] Running
	I0109 00:04:14.770992 1684539 system_pods.go:89] "kube-ingress-dns-minikube" [220059cf-253b-4ce8-8a22-9950d48be27e] Running / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0109 00:04:14.771014 1684539 system_pods.go:89] "kube-proxy-4864k" [d8eba8dc-df5a-4622-9a2c-c85a25decbbe] Running
	I0109 00:04:14.771035 1684539 system_pods.go:89] "kube-scheduler-addons-983119" [0ae6afbf-048b-4a3b-b030-51e9d370c4e8] Running
	I0109 00:04:14.771055 1684539 system_pods.go:89] "metrics-server-7c66d45ddc-wvvsq" [3b0056d9-627e-46f2-a86a-e7f4cc7ca3da] Running
	I0109 00:04:14.771075 1684539 system_pods.go:89] "nvidia-device-plugin-daemonset-2qj49" [5d4c1201-ce21-4462-b0d3-1bf7598039b3] Running
	I0109 00:04:14.771094 1684539 system_pods.go:89] "registry-lwbmr" [10d6756f-1d99-487a-9be4-279128cdb09c] Running
	I0109 00:04:14.771113 1684539 system_pods.go:89] "registry-proxy-dbt9w" [a1758f6f-e461-403a-82f6-be54e122eb97] Running
	I0109 00:04:14.771132 1684539 system_pods.go:89] "snapshot-controller-58dbcc7b99-s8qf4" [e41be95b-cf22-4ebe-9d04-0d80b839aa67] Running
	I0109 00:04:14.771152 1684539 system_pods.go:89] "snapshot-controller-58dbcc7b99-w652z" [3c24102b-c3eb-4def-88fa-aa9e69ea5428] Running
	I0109 00:04:14.771173 1684539 system_pods.go:89] "storage-provisioner" [33d32fb2-fb21-4069-909c-51de5390110f] Running
	I0109 00:04:14.771194 1684539 system_pods.go:126] duration metric: took 12.515892ms to wait for k8s-apps to be running ...
	I0109 00:04:14.771214 1684539 system_svc.go:44] waiting for kubelet service to be running ....
	I0109 00:04:14.771283 1684539 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0109 00:04:14.786431 1684539 system_svc.go:56] duration metric: took 15.196062ms WaitForService to wait for kubelet.
	I0109 00:04:14.786537 1684539 kubeadm.go:581] duration metric: took 1m37.742844155s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0109 00:04:14.786581 1684539 node_conditions.go:102] verifying NodePressure condition ...
	I0109 00:04:14.790245 1684539 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0109 00:04:14.790321 1684539 node_conditions.go:123] node cpu capacity is 2
	I0109 00:04:14.790348 1684539 node_conditions.go:105] duration metric: took 3.746855ms to run NodePressure ...
	I0109 00:04:14.790374 1684539 start.go:228] waiting for startup goroutines ...
	I0109 00:04:15.067166 1684539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0109 00:04:15.567234 1684539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0109 00:04:16.067106 1684539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0109 00:04:16.566102 1684539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0109 00:04:17.066914 1684539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0109 00:04:17.566541 1684539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0109 00:04:18.067608 1684539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0109 00:04:18.566804 1684539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0109 00:04:19.066412 1684539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0109 00:04:19.566677 1684539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0109 00:04:20.066582 1684539 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0109 00:04:20.565758 1684539 kapi.go:107] duration metric: took 1m38.50670018s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0109 00:04:20.568010 1684539 out.go:177] * Enabled addons: nvidia-device-plugin, ingress-dns, storage-provisioner, default-storageclass, cloud-spanner, inspektor-gadget, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, gcp-auth, csi-hostpath-driver, ingress
	I0109 00:04:20.569886 1684539 addons.go:508] enable addons completed in 1m45.168935537s: enabled=[nvidia-device-plugin ingress-dns storage-provisioner default-storageclass cloud-spanner inspektor-gadget metrics-server yakd storage-provisioner-rancher volumesnapshots registry gcp-auth csi-hostpath-driver ingress]
	I0109 00:04:20.569931 1684539 start.go:233] waiting for cluster config update ...
	I0109 00:04:20.569951 1684539 start.go:242] writing updated cluster config ...
	I0109 00:04:20.570241 1684539 ssh_runner.go:195] Run: rm -f paused
	I0109 00:04:20.892775 1684539 start.go:600] kubectl: 1.29.0, cluster: 1.28.4 (minor skew: 1)
	I0109 00:04:20.895170 1684539 out.go:177] * Done! kubectl is now configured to use "addons-983119" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Jan 09 00:07:36 addons-983119 crio[883]: time="2024-01-09 00:07:36.059010924Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,RepoTags:[gcr.io/google-samples/hello-app:1.0],RepoDigests:[gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7],Size_:28999827,Uid:nil,Username:nonroot,Spec:nil,},Info:map[string]string{},}" id=d70cc166-b6c3-406e-8efc-d24b7cc68862 name=/runtime.v1.ImageService/ImageStatus
	Jan 09 00:07:36 addons-983119 crio[883]: time="2024-01-09 00:07:36.060606885Z" level=info msg="Checking image status: gcr.io/google-samples/hello-app:1.0" id=85bbaa08-bf50-41c7-a591-3a31d5379668 name=/runtime.v1.ImageService/ImageStatus
	Jan 09 00:07:36 addons-983119 crio[883]: time="2024-01-09 00:07:36.060796360Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,RepoTags:[gcr.io/google-samples/hello-app:1.0],RepoDigests:[gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7],Size_:28999827,Uid:nil,Username:nonroot,Spec:nil,},Info:map[string]string{},}" id=85bbaa08-bf50-41c7-a591-3a31d5379668 name=/runtime.v1.ImageService/ImageStatus
	Jan 09 00:07:36 addons-983119 crio[883]: time="2024-01-09 00:07:36.061627596Z" level=info msg="Creating container: default/hello-world-app-5d77478584-8c5sk/hello-world-app" id=931d4c15-f272-4bbc-9580-e9d08778d8a5 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 09 00:07:36 addons-983119 crio[883]: time="2024-01-09 00:07:36.061735987Z" level=warning msg="Allowed annotations are specified for workload []"
	Jan 09 00:07:36 addons-983119 crio[883]: time="2024-01-09 00:07:36.149225663Z" level=info msg="Created container 07f523968424850b2e559b757a27588d47a1c8f0dd1b177f26f8c88c1bbbb30d: default/hello-world-app-5d77478584-8c5sk/hello-world-app" id=931d4c15-f272-4bbc-9580-e9d08778d8a5 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 09 00:07:36 addons-983119 crio[883]: time="2024-01-09 00:07:36.150524219Z" level=info msg="Starting container: 07f523968424850b2e559b757a27588d47a1c8f0dd1b177f26f8c88c1bbbb30d" id=c0890267-936d-426b-861f-f8c3d7e46612 name=/runtime.v1.RuntimeService/StartContainer
	Jan 09 00:07:36 addons-983119 conmon[8181]: conmon 07f523968424850b2e55 <ninfo>: container 8192 exited with status 1
	Jan 09 00:07:36 addons-983119 crio[883]: time="2024-01-09 00:07:36.164390067Z" level=info msg="Started container" PID=8192 containerID=07f523968424850b2e559b757a27588d47a1c8f0dd1b177f26f8c88c1bbbb30d description=default/hello-world-app-5d77478584-8c5sk/hello-world-app id=c0890267-936d-426b-861f-f8c3d7e46612 name=/runtime.v1.RuntimeService/StartContainer sandboxID=dd4b67fdf7f7cbbfa63528e61489810835ee8d6a9faddda69eb63e150c0af909
	Jan 09 00:07:37 addons-983119 crio[883]: time="2024-01-09 00:07:37.109807252Z" level=info msg="Removing container: 346220f49bc93784a5d8c976fee6125051ad50889e1e437fe3e733cfc23467bf" id=c7047d41-0184-4142-9ed2-dbee855cb608 name=/runtime.v1.RuntimeService/RemoveContainer
	Jan 09 00:07:37 addons-983119 crio[883]: time="2024-01-09 00:07:37.138550305Z" level=info msg="Removed container 346220f49bc93784a5d8c976fee6125051ad50889e1e437fe3e733cfc23467bf: default/hello-world-app-5d77478584-8c5sk/hello-world-app" id=c7047d41-0184-4142-9ed2-dbee855cb608 name=/runtime.v1.RuntimeService/RemoveContainer
	Jan 09 00:07:37 addons-983119 crio[883]: time="2024-01-09 00:07:37.855770523Z" level=warning msg="Stopping container 15d99037b4888445a40f7bf7d6a98dd3d4fdd9a6f51a5ef0b6eecc47f1326fbb with stop signal timed out: timeout reached after 2 seconds waiting for container process to exit" id=ac2dbd07-e80b-408d-ad6b-7cf946dbe3f4 name=/runtime.v1.RuntimeService/StopContainer
	Jan 09 00:07:37 addons-983119 conmon[5356]: conmon 15d99037b4888445a40f <ninfo>: container 5367 exited with status 137
	Jan 09 00:07:38 addons-983119 crio[883]: time="2024-01-09 00:07:38.011126528Z" level=info msg="Stopped container 15d99037b4888445a40f7bf7d6a98dd3d4fdd9a6f51a5ef0b6eecc47f1326fbb: ingress-nginx/ingress-nginx-controller-69cff4fd79-slprc/controller" id=ac2dbd07-e80b-408d-ad6b-7cf946dbe3f4 name=/runtime.v1.RuntimeService/StopContainer
	Jan 09 00:07:38 addons-983119 crio[883]: time="2024-01-09 00:07:38.011639607Z" level=info msg="Stopping pod sandbox: 4714ce52227fc8c2695fc842350bbb7619ff0f2d33c71da69ad3aa0520342928" id=8f5cc370-1368-472f-a37e-068c6dd481d5 name=/runtime.v1.RuntimeService/StopPodSandbox
	Jan 09 00:07:38 addons-983119 crio[883]: time="2024-01-09 00:07:38.015236866Z" level=info msg="Restoring iptables rules: *nat\n:KUBE-HP-U5WRAVQ5WKRQDHEC - [0:0]\n:KUBE-HOSTPORTS - [0:0]\n:KUBE-HP-BX2QRBOUKCVI2LJ4 - [0:0]\n-X KUBE-HP-BX2QRBOUKCVI2LJ4\n-X KUBE-HP-U5WRAVQ5WKRQDHEC\nCOMMIT\n"
	Jan 09 00:07:38 addons-983119 crio[883]: time="2024-01-09 00:07:38.016878087Z" level=info msg="Closing host port tcp:80"
	Jan 09 00:07:38 addons-983119 crio[883]: time="2024-01-09 00:07:38.016923659Z" level=info msg="Closing host port tcp:443"
	Jan 09 00:07:38 addons-983119 crio[883]: time="2024-01-09 00:07:38.018528145Z" level=info msg="Host port tcp:80 does not have an open socket"
	Jan 09 00:07:38 addons-983119 crio[883]: time="2024-01-09 00:07:38.018555715Z" level=info msg="Host port tcp:443 does not have an open socket"
	Jan 09 00:07:38 addons-983119 crio[883]: time="2024-01-09 00:07:38.018733242Z" level=info msg="Got pod network &{Name:ingress-nginx-controller-69cff4fd79-slprc Namespace:ingress-nginx ID:4714ce52227fc8c2695fc842350bbb7619ff0f2d33c71da69ad3aa0520342928 UID:8233e4d8-7555-4921-95ba-f93a031df6a2 NetNS:/var/run/netns/4def0cc0-b1ab-4dc1-97da-91af1faf69be Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Jan 09 00:07:38 addons-983119 crio[883]: time="2024-01-09 00:07:38.018872854Z" level=info msg="Deleting pod ingress-nginx_ingress-nginx-controller-69cff4fd79-slprc from CNI network \"kindnet\" (type=ptp)"
	Jan 09 00:07:38 addons-983119 crio[883]: time="2024-01-09 00:07:38.040107064Z" level=info msg="Stopped pod sandbox: 4714ce52227fc8c2695fc842350bbb7619ff0f2d33c71da69ad3aa0520342928" id=8f5cc370-1368-472f-a37e-068c6dd481d5 name=/runtime.v1.RuntimeService/StopPodSandbox
	Jan 09 00:07:38 addons-983119 crio[883]: time="2024-01-09 00:07:38.114048245Z" level=info msg="Removing container: 15d99037b4888445a40f7bf7d6a98dd3d4fdd9a6f51a5ef0b6eecc47f1326fbb" id=6219bf9b-0e7d-4b6c-9fcb-96e163ead0aa name=/runtime.v1.RuntimeService/RemoveContainer
	Jan 09 00:07:38 addons-983119 crio[883]: time="2024-01-09 00:07:38.130896718Z" level=info msg="Removed container 15d99037b4888445a40f7bf7d6a98dd3d4fdd9a6f51a5ef0b6eecc47f1326fbb: ingress-nginx/ingress-nginx-controller-69cff4fd79-slprc/controller" id=6219bf9b-0e7d-4b6c-9fcb-96e163ead0aa name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	07f5239684248       dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79                                                             7 seconds ago       Exited              hello-world-app           2                   dd4b67fdf7f7c       hello-world-app-5d77478584-8c5sk
	f8a0a7b27853b       ghcr.io/headlamp-k8s/headlamp@sha256:0fe50c48c186b89ff3d341dba427174d8232a64c3062af5de854a3a7cb2105ce                        55 seconds ago      Running             headlamp                  0                   bff65cc636307       headlamp-7ddfbb94ff-b9vrz
	abb1267b7c979       docker.io/library/nginx@sha256:7913e8fa2e6a5f0160a5e6b7ea48b7d4a301c6058d63c3d632a35a59093cb4eb                              2 minutes ago       Running             nginx                     0                   7639dcdc75d34       nginx
	43b75c644b360       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:63b520448091bc94aa4dba00d6b3b3c25e410c4fb73aa46feae5b25f9895abaa                 3 minutes ago       Running             gcp-auth                  0                   4e5a075360e9f       gcp-auth-d4c87556c-q94hn
	0a3369e09ae0a       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:67202a0258c6f81d073f265f449a732c89cc1112a8e80ea27317294df6dce2b5   3 minutes ago       Exited              patch                     0                   0867bdb9547c6       ingress-nginx-admission-patch-ns7xb
	1faf4bfd4e4b8       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:67202a0258c6f81d073f265f449a732c89cc1112a8e80ea27317294df6dce2b5   3 minutes ago       Exited              create                    0                   1c1a666a6a4a0       ingress-nginx-admission-create-n5ghw
	0a58f0cbb6c0c       docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310                              4 minutes ago       Running             yakd                      0                   0313b1e3f4ac9       yakd-dashboard-9947fc6bf-fh4qw
	da0aaa3251b2d       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                             4 minutes ago       Running             storage-provisioner       0                   6aa8ef465cfbc       storage-provisioner
	b7a4ca1bd2b68       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                                             4 minutes ago       Running             coredns                   0                   cb696a3cc2348       coredns-5dd5756b68-vzg2p
	c83f5ce969abb       3ca3ca488cf13fde14cfc4b3ffde0c53a8c161b030f4a444a797fba6aef38c39                                                             5 minutes ago       Running             kube-proxy                0                   849104685bc50       kube-proxy-4864k
	b0dd2e239ba47       04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26                                                             5 minutes ago       Running             kindnet-cni               0                   d0b6d019a656f       kindnet-t4gmv
	1011042feffca       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                                             5 minutes ago       Running             etcd                      0                   c007c4405617b       etcd-addons-983119
	a57018d4c10a8       9961cbceaf234d59b7dcf8a197a024f3e3ce4b7fe2b67c2378efd3d209ca994b                                                             5 minutes ago       Running             kube-controller-manager   0                   deaf483cff26b       kube-controller-manager-addons-983119
	62783a6bd185f       04b4c447bb9d4840af3bf7e836397379d65df87c86e55dcd27f31a8d11df2419                                                             5 minutes ago       Running             kube-apiserver            0                   6451b58fc799d       kube-apiserver-addons-983119
	ca546a071dfbf       05c284c929889d88306fdb3dd14ee2d0132543740f9e247685243214fc3d2c54                                                             5 minutes ago       Running             kube-scheduler            0                   a6c1d05a18922       kube-scheduler-addons-983119
	
	
	==> coredns [b7a4ca1bd2b68b45e2e49096733a1a921f7244222487f8532eedf55bf9fc310a] <==
	[INFO] 10.244.0.20:35414 - 58298 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000060719s
	[INFO] 10.244.0.20:35414 - 39104 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.000776367s
	[INFO] 10.244.0.20:51476 - 10768 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002558357s
	[INFO] 10.244.0.20:51476 - 31572 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001237804s
	[INFO] 10.244.0.20:35414 - 14380 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001693471s
	[INFO] 10.244.0.20:51476 - 30868 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000243374s
	[INFO] 10.244.0.20:35414 - 14461 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000229467s
	[INFO] 10.244.0.20:50571 - 41147 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000099956s
	[INFO] 10.244.0.20:50571 - 25568 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000050503s
	[INFO] 10.244.0.20:36327 - 3274 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000044349s
	[INFO] 10.244.0.20:36327 - 36088 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000053457s
	[INFO] 10.244.0.20:36327 - 60629 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000094967s
	[INFO] 10.244.0.20:36327 - 39967 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.00005367s
	[INFO] 10.244.0.20:36327 - 56067 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000110679s
	[INFO] 10.244.0.20:36327 - 15709 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000101466s
	[INFO] 10.244.0.20:50571 - 42929 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000092292s
	[INFO] 10.244.0.20:50571 - 30905 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000076694s
	[INFO] 10.244.0.20:50571 - 13744 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000061777s
	[INFO] 10.244.0.20:50571 - 8956 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000315572s
	[INFO] 10.244.0.20:36327 - 31961 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001538713s
	[INFO] 10.244.0.20:50571 - 39946 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001395392s
	[INFO] 10.244.0.20:36327 - 30665 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001303445s
	[INFO] 10.244.0.20:36327 - 12523 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000049371s
	[INFO] 10.244.0.20:50571 - 11887 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001004505s
	[INFO] 10.244.0.20:50571 - 43097 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000052357s
	
	
	==> describe nodes <==
	Name:               addons-983119
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-983119
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a
	                    minikube.k8s.io/name=addons-983119
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_09T00_02_23_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-983119
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 09 Jan 2024 00:02:18 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-983119
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 09 Jan 2024 00:07:40 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 09 Jan 2024 00:07:28 +0000   Tue, 09 Jan 2024 00:02:16 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 09 Jan 2024 00:07:28 +0000   Tue, 09 Jan 2024 00:02:16 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 09 Jan 2024 00:07:28 +0000   Tue, 09 Jan 2024 00:02:16 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 09 Jan 2024 00:07:28 +0000   Tue, 09 Jan 2024 00:03:08 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-983119
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	System Info:
	  Machine ID:                 0ceffc9f88504a42b318648234f14b7e
	  System UUID:                92a74203-e5f3-430a-ac59-55776e1d4abf
	  Boot ID:                    9a753e90-64b1-452a-8e10-9b878947801f
	  Kernel Version:             5.15.0-1051-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5d77478584-8c5sk         0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m47s
	  gcp-auth                    gcp-auth-d4c87556c-q94hn                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m57s
	  headlamp                    headlamp-7ddfbb94ff-b9vrz                0 (0%)        0 (0%)      0 (0%)           0 (0%)         59s
	  kube-system                 coredns-5dd5756b68-vzg2p                 100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     5m8s
	  kube-system                 etcd-addons-983119                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         5m23s
	  kube-system                 kindnet-t4gmv                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      5m8s
	  kube-system                 kube-apiserver-addons-983119             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m21s
	  kube-system                 kube-controller-manager-addons-983119    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m21s
	  kube-system                 kube-proxy-4864k                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m8s
	  kube-system                 kube-scheduler-addons-983119             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m21s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m2s
	  yakd-dashboard              yakd-dashboard-9947fc6bf-fh4qw           0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     5m2s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)   100m (5%)
	  memory             348Mi (4%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 5m1s   kube-proxy       
	  Normal  Starting                 5m21s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m21s  kubelet          Node addons-983119 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m21s  kubelet          Node addons-983119 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m21s  kubelet          Node addons-983119 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           5m8s   node-controller  Node addons-983119 event: Registered Node addons-983119 in Controller
	  Normal  NodeReady                4m35s  kubelet          Node addons-983119 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000726] FS-Cache: N-cookie c=00000054 [p=0000004b fl=2 nc=0 na=1]
	[  +0.001085] FS-Cache: N-cookie d=000000001df03bef{9p.inode} n=000000003d7a4016
	[  +0.001077] FS-Cache: N-key=[8] '4274ed0000000000'
	[  +0.002783] FS-Cache: Duplicate cookie detected
	[  +0.000750] FS-Cache: O-cookie c=0000004e [p=0000004b fl=226 nc=0 na=1]
	[  +0.001073] FS-Cache: O-cookie d=000000001df03bef{9p.inode} n=000000007423a6ff
	[  +0.001120] FS-Cache: O-key=[8] '4274ed0000000000'
	[  +0.000835] FS-Cache: N-cookie c=00000055 [p=0000004b fl=2 nc=0 na=1]
	[  +0.001012] FS-Cache: N-cookie d=000000001df03bef{9p.inode} n=00000000372890f4
	[  +0.001126] FS-Cache: N-key=[8] '4274ed0000000000'
	[  +2.747018] FS-Cache: Duplicate cookie detected
	[  +0.000726] FS-Cache: O-cookie c=0000004c [p=0000004b fl=226 nc=0 na=1]
	[  +0.000998] FS-Cache: O-cookie d=000000001df03bef{9p.inode} n=00000000f8adacdd
	[  +0.001174] FS-Cache: O-key=[8] '4174ed0000000000'
	[  +0.000759] FS-Cache: N-cookie c=00000057 [p=0000004b fl=2 nc=0 na=1]
	[  +0.000977] FS-Cache: N-cookie d=000000001df03bef{9p.inode} n=000000003d7a4016
	[  +0.001076] FS-Cache: N-key=[8] '4174ed0000000000'
	[  +0.370147] FS-Cache: Duplicate cookie detected
	[  +0.000727] FS-Cache: O-cookie c=00000051 [p=0000004b fl=226 nc=0 na=1]
	[  +0.001075] FS-Cache: O-cookie d=000000001df03bef{9p.inode} n=0000000086c37022
	[  +0.001208] FS-Cache: O-key=[8] '4774ed0000000000'
	[  +0.000721] FS-Cache: N-cookie c=00000058 [p=0000004b fl=2 nc=0 na=1]
	[  +0.000994] FS-Cache: N-cookie d=000000001df03bef{9p.inode} n=0000000070425b3c
	[  +0.001176] FS-Cache: N-key=[8] '4774ed0000000000'
	[Jan 8 23:21] overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	
	
	==> etcd [1011042feffcab2580c6186d821b65221c50d7197c9277d66b72a69e20117153] <==
	{"level":"info","ts":"2024-01-09T00:02:36.442347Z","caller":"traceutil/trace.go:171","msg":"trace[383916467] transaction","detail":"{read_only:false; response_revision:354; number_of_response:1; }","duration":"235.532748ms","start":"2024-01-09T00:02:36.206802Z","end":"2024-01-09T00:02:36.442334Z","steps":["trace[383916467] 'process raft request'  (duration: 235.492091ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-09T00:02:36.44816Z","caller":"traceutil/trace.go:171","msg":"trace[578442744] transaction","detail":"{read_only:false; response_revision:352; number_of_response:1; }","duration":"415.548563ms","start":"2024-01-09T00:02:36.032587Z","end":"2024-01-09T00:02:36.448136Z","steps":["trace[578442744] 'process raft request'  (duration: 33.545901ms)","trace[578442744] 'compare'  (duration: 184.601144ms)"],"step_count":2}
	{"level":"warn","ts":"2024-01-09T00:02:36.448314Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-01-09T00:02:36.03257Z","time spent":"415.688109ms","remote":"127.0.0.1:33958","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":185,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/serviceaccounts/kube-node-lease/default\" mod_revision:324 > success:<request_put:<key:\"/registry/serviceaccounts/kube-node-lease/default\" value_size:128 >> failure:<request_range:<key:\"/registry/serviceaccounts/kube-node-lease/default\" > >"}
	{"level":"info","ts":"2024-01-09T00:02:36.571862Z","caller":"traceutil/trace.go:171","msg":"trace[1431272289] linearizableReadLoop","detail":"{readStateIndex:362; appliedIndex:361; }","duration":"451.4466ms","start":"2024-01-09T00:02:36.052066Z","end":"2024-01-09T00:02:36.503512Z","steps":["trace[1431272289] 'read index received'  (duration: 14.075044ms)","trace[1431272289] 'applied index is now lower than readState.Index'  (duration: 437.369727ms)"],"step_count":2}
	{"level":"info","ts":"2024-01-09T00:02:36.572966Z","caller":"traceutil/trace.go:171","msg":"trace[631908117] transaction","detail":"{read_only:false; number_of_response:1; response_revision:353; }","duration":"518.297834ms","start":"2024-01-09T00:02:36.054656Z","end":"2024-01-09T00:02:36.572954Z","steps":["trace[631908117] 'process raft request'  (duration: 387.557804ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-09T00:02:36.574009Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-01-09T00:02:36.054641Z","time spent":"519.252173ms","remote":"127.0.0.1:33946","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":55,"response count":0,"response size":42,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/coredns-5dd5756b68-n86k4\" mod_revision:351 > success:<request_delete_range:<key:\"/registry/pods/kube-system/coredns-5dd5756b68-n86k4\" > > failure:<request_range:<key:\"/registry/pods/kube-system/coredns-5dd5756b68-n86k4\" > >"}
	{"level":"warn","ts":"2024-01-09T00:02:36.574522Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"522.977226ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/kube-system/coredns\" ","response":"range_response_count:1 size:4096"}
	{"level":"info","ts":"2024-01-09T00:02:36.574602Z","caller":"traceutil/trace.go:171","msg":"trace[885435202] range","detail":"{range_begin:/registry/deployments/kube-system/coredns; range_end:; response_count:1; response_revision:354; }","duration":"523.06165ms","start":"2024-01-09T00:02:36.051531Z","end":"2024-01-09T00:02:36.574593Z","steps":["trace[885435202] 'agreement among raft nodes before linearized reading'  (duration: 522.957903ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-09T00:02:36.574652Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-01-09T00:02:36.051513Z","time spent":"523.131952ms","remote":"127.0.0.1:34228","response type":"/etcdserverpb.KV/Range","request count":0,"request size":43,"response count":1,"response size":4120,"request content":"key:\"/registry/deployments/kube-system/coredns\" "}
	{"level":"info","ts":"2024-01-09T00:02:36.742902Z","caller":"traceutil/trace.go:171","msg":"trace[1414700637] linearizableReadLoop","detail":"{readStateIndex:364; appliedIndex:364; }","duration":"170.13564ms","start":"2024-01-09T00:02:36.572752Z","end":"2024-01-09T00:02:36.742888Z","steps":["trace[1414700637] 'read index received'  (duration: 170.130954ms)","trace[1414700637] 'applied index is now lower than readState.Index'  (duration: 3.553µs)"],"step_count":2}
	{"level":"warn","ts":"2024-01-09T00:02:36.743113Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"688.328865ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/addons-983119\" ","response":"range_response_count:1 size:5743"}
	{"level":"info","ts":"2024-01-09T00:02:36.743713Z","caller":"traceutil/trace.go:171","msg":"trace[1780712474] range","detail":"{range_begin:/registry/minions/addons-983119; range_end:; response_count:1; response_revision:354; }","duration":"688.936845ms","start":"2024-01-09T00:02:36.054765Z","end":"2024-01-09T00:02:36.743702Z","steps":["trace[1780712474] 'agreement among raft nodes before linearized reading'  (duration: 688.292819ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-09T00:02:36.743157Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"688.403524ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/addons-983119\" ","response":"range_response_count:1 size:5743"}
	{"level":"warn","ts":"2024-01-09T00:02:36.743182Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"688.450089ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/addons-983119\" ","response":"range_response_count:1 size:5743"}
	{"level":"warn","ts":"2024-01-09T00:02:36.74321Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"688.663415ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/addons-983119\" ","response":"range_response_count:1 size:5743"}
	{"level":"warn","ts":"2024-01-09T00:02:36.784474Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-01-09T00:02:36.054762Z","time spent":"719.751326ms","remote":"127.0.0.1:33930","response type":"/etcdserverpb.KV/Range","request count":0,"request size":33,"response count":1,"response size":5767,"request content":"key:\"/registry/minions/addons-983119\" "}
	{"level":"info","ts":"2024-01-09T00:02:36.812141Z","caller":"traceutil/trace.go:171","msg":"trace[1691968410] range","detail":"{range_begin:/registry/minions/addons-983119; range_end:; response_count:1; response_revision:354; }","duration":"757.373176ms","start":"2024-01-09T00:02:36.054749Z","end":"2024-01-09T00:02:36.812122Z","steps":["trace[1691968410] 'agreement among raft nodes before linearized reading'  (duration: 688.38137ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-09T00:02:36.812262Z","caller":"traceutil/trace.go:171","msg":"trace[2075519328] range","detail":"{range_begin:/registry/minions/addons-983119; range_end:; response_count:1; response_revision:354; }","duration":"757.526893ms","start":"2024-01-09T00:02:36.054729Z","end":"2024-01-09T00:02:36.812255Z","steps":["trace[2075519328] 'agreement among raft nodes before linearized reading'  (duration: 688.43701ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-09T00:02:36.865491Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-01-09T00:02:36.054724Z","time spent":"810.725946ms","remote":"127.0.0.1:33930","response type":"/etcdserverpb.KV/Range","request count":0,"request size":33,"response count":1,"response size":5767,"request content":"key:\"/registry/minions/addons-983119\" "}
	{"level":"info","ts":"2024-01-09T00:02:36.812341Z","caller":"traceutil/trace.go:171","msg":"trace[1942527543] range","detail":"{range_begin:/registry/minions/addons-983119; range_end:; response_count:1; response_revision:354; }","duration":"757.792856ms","start":"2024-01-09T00:02:36.054542Z","end":"2024-01-09T00:02:36.812335Z","steps":["trace[1942527543] 'agreement among raft nodes before linearized reading'  (duration: 688.647744ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-09T00:02:36.865665Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-01-09T00:02:36.053295Z","time spent":"812.36331ms","remote":"127.0.0.1:33930","response type":"/etcdserverpb.KV/Range","request count":0,"request size":33,"response count":1,"response size":5767,"request content":"key:\"/registry/minions/addons-983119\" "}
	{"level":"warn","ts":"2024-01-09T00:02:36.812715Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-01-09T00:02:36.054746Z","time spent":"757.947869ms","remote":"127.0.0.1:33930","response type":"/etcdserverpb.KV/Range","request count":0,"request size":33,"response count":1,"response size":5767,"request content":"key:\"/registry/minions/addons-983119\" "}
	{"level":"info","ts":"2024-01-09T00:02:38.423491Z","caller":"traceutil/trace.go:171","msg":"trace[1061176484] transaction","detail":"{read_only:false; response_revision:365; number_of_response:1; }","duration":"111.638505ms","start":"2024-01-09T00:02:38.311836Z","end":"2024-01-09T00:02:38.423475Z","steps":["trace[1061176484] 'process raft request'  (duration: 111.532313ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-09T00:02:40.639505Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"123.462321ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/resourcequotas/gadget/\" range_end:\"/registry/resourcequotas/gadget0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-01-09T00:02:40.640279Z","caller":"traceutil/trace.go:171","msg":"trace[1779008435] range","detail":"{range_begin:/registry/resourcequotas/gadget/; range_end:/registry/resourcequotas/gadget0; response_count:0; response_revision:401; }","duration":"124.2369ms","start":"2024-01-09T00:02:40.516019Z","end":"2024-01-09T00:02:40.640256Z","steps":["trace[1779008435] 'agreement among raft nodes before linearized reading'  (duration: 123.440594ms)"],"step_count":1}
	
	
	==> gcp-auth [43b75c644b360654ead2fe1318d35e028b59be9bccb9c2a716a86cfe91507e5c] <==
	2024/01/09 00:03:54 GCP Auth Webhook started!
	2024/01/09 00:04:32 Ready to marshal response ...
	2024/01/09 00:04:32 Ready to write response ...
	2024/01/09 00:04:44 Ready to marshal response ...
	2024/01/09 00:04:44 Ready to write response ...
	2024/01/09 00:04:56 Ready to marshal response ...
	2024/01/09 00:04:56 Ready to write response ...
	2024/01/09 00:05:18 Ready to marshal response ...
	2024/01/09 00:05:18 Ready to write response ...
	2024/01/09 00:05:46 Ready to marshal response ...
	2024/01/09 00:05:46 Ready to write response ...
	2024/01/09 00:05:46 Ready to marshal response ...
	2024/01/09 00:05:46 Ready to write response ...
	2024/01/09 00:05:53 Ready to marshal response ...
	2024/01/09 00:05:53 Ready to write response ...
	2024/01/09 00:06:44 Ready to marshal response ...
	2024/01/09 00:06:44 Ready to write response ...
	2024/01/09 00:06:44 Ready to marshal response ...
	2024/01/09 00:06:44 Ready to write response ...
	2024/01/09 00:06:44 Ready to marshal response ...
	2024/01/09 00:06:44 Ready to write response ...
	2024/01/09 00:07:17 Ready to marshal response ...
	2024/01/09 00:07:17 Ready to write response ...
	
	
	==> kernel <==
	 00:07:43 up  6:50,  0 users,  load average: 0.56, 1.61, 2.68
	Linux addons-983119 5.15.0-1051-aws #56~20.04.1-Ubuntu SMP Tue Nov 28 15:43:06 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	
	==> kindnet [b0dd2e239ba472bd999437617e44545f1256b50285fac10dbda59ac5e2f56bcc] <==
	I0109 00:05:38.351376       1 main.go:227] handling current node
	I0109 00:05:48.362730       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0109 00:05:48.362758       1 main.go:227] handling current node
	I0109 00:05:58.373998       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0109 00:05:58.374027       1 main.go:227] handling current node
	I0109 00:06:08.385148       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0109 00:06:08.385177       1 main.go:227] handling current node
	I0109 00:06:18.398059       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0109 00:06:18.398089       1 main.go:227] handling current node
	I0109 00:06:28.401793       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0109 00:06:28.401818       1 main.go:227] handling current node
	I0109 00:06:38.414020       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0109 00:06:38.414046       1 main.go:227] handling current node
	I0109 00:06:48.418768       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0109 00:06:48.418981       1 main.go:227] handling current node
	I0109 00:06:58.422721       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0109 00:06:58.422747       1 main.go:227] handling current node
	I0109 00:07:08.435091       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0109 00:07:08.435119       1 main.go:227] handling current node
	I0109 00:07:18.447963       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0109 00:07:18.447992       1 main.go:227] handling current node
	I0109 00:07:28.452378       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0109 00:07:28.452409       1 main.go:227] handling current node
	I0109 00:07:38.464742       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0109 00:07:38.464771       1 main.go:227] handling current node
	
	
	==> kube-apiserver [62783a6bd185f396cb47eb0c07714d63bd5492ab074f91c3541bdb334eed0f96] <==
	I0109 00:04:55.572546       1 controller.go:624] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0109 00:04:56.405248       1 controller.go:624] quota admission added evaluator for: ingresses.networking.k8s.io
	I0109 00:04:56.845257       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.107.91.174"}
	I0109 00:05:15.562513       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0109 00:05:33.805231       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0109 00:05:33.805368       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0109 00:05:33.818068       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0109 00:05:33.818128       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0109 00:05:33.851682       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0109 00:05:33.851807       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0109 00:05:33.859632       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0109 00:05:33.860360       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0109 00:05:33.864830       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0109 00:05:33.865380       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0109 00:05:33.892576       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0109 00:05:33.892628       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0109 00:05:33.902140       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0109 00:05:33.902185       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0109 00:05:34.852322       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0109 00:05:34.903104       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0109 00:05:34.963900       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	E0109 00:06:09.842590       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0109 00:06:44.097435       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.107.39.131"}
	I0109 00:07:17.621166       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.98.149.112"}
	E0109 00:07:34.118790       1 watch.go:287] unable to encode watch object *v1.WatchEvent: http2: stream closed (&streaming.encoderWithAllocator{writer:responsewriter.outerWithCloseNotifyAndFlush{UserProvidedDecorator:(*metrics.ResponseWriterDelegator)(0x40070b4240), InnerCloseNotifierFlusher:struct { httpsnoop.Unwrapper; http.ResponseWriter; http.Flusher; http.CloseNotifier; http.Pusher }{Unwrapper:(*httpsnoop.rw)(0x4004f43220), ResponseWriter:(*httpsnoop.rw)(0x4004f43220), Flusher:(*httpsnoop.rw)(0x4004f43220), CloseNotifier:(*httpsnoop.rw)(0x4004f43220), Pusher:(*httpsnoop.rw)(0x4004f43220)}}, encoder:(*versioning.codec)(0x4005c9db80), memAllocator:(*runtime.Allocator)(0x4004790990)})
	
	
	==> kube-controller-manager [a57018d4c10a853439070e81b0334c348d0a675fb9830a2b3876094408d092db] <==
	E0109 00:06:45.635850       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0109 00:06:49.001993       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="headlamp/headlamp-7ddfbb94ff" duration="45.744µs"
	I0109 00:06:49.042118       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="headlamp/headlamp-7ddfbb94ff" duration="9.852911ms"
	I0109 00:06:49.042344       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="headlamp/headlamp-7ddfbb94ff" duration="38.721µs"
	W0109 00:07:04.192789       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0109 00:07:04.192823       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0109 00:07:05.898301       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0109 00:07:05.898335       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0109 00:07:11.743474       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0109 00:07:11.743509       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0109 00:07:17.329175       1 event.go:307] "Event occurred" object="default/hello-world-app" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-world-app-5d77478584 to 1"
	I0109 00:07:17.351428       1 event.go:307] "Event occurred" object="default/hello-world-app-5d77478584" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-world-app-5d77478584-8c5sk"
	I0109 00:07:17.358750       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="29.365203ms"
	I0109 00:07:17.368517       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="9.627259ms"
	I0109 00:07:17.368690       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="37.539µs"
	I0109 00:07:17.384055       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="67.545µs"
	I0109 00:07:20.070694       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="36.727µs"
	I0109 00:07:21.069239       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="66.815µs"
	I0109 00:07:22.071525       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="43.709µs"
	W0109 00:07:30.034300       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0109 00:07:30.034339       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0109 00:07:34.822180       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-create"
	I0109 00:07:34.826100       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-69cff4fd79" duration="4.808µs"
	I0109 00:07:34.828690       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-patch"
	I0109 00:07:37.135615       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="57.043µs"
	
	
	==> kube-proxy [c83f5ce969abb821559581b8daa7137b2e93fabc70e7eb5fd6bd59e8f3a5d791] <==
	I0109 00:02:41.136838       1 server_others.go:69] "Using iptables proxy"
	I0109 00:02:41.388836       1 node.go:141] Successfully retrieved node IP: 192.168.49.2
	I0109 00:02:41.698296       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0109 00:02:41.701313       1 server_others.go:152] "Using iptables Proxier"
	I0109 00:02:41.701413       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0109 00:02:41.701447       1 server_others.go:438] "Defaulting to no-op detect-local"
	I0109 00:02:41.701573       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0109 00:02:41.701917       1 server.go:846] "Version info" version="v1.28.4"
	I0109 00:02:41.702601       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0109 00:02:41.704553       1 config.go:188] "Starting service config controller"
	I0109 00:02:41.710991       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0109 00:02:41.711053       1 config.go:97] "Starting endpoint slice config controller"
	I0109 00:02:41.711061       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0109 00:02:41.713422       1 config.go:315] "Starting node config controller"
	I0109 00:02:41.713442       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0109 00:02:41.818622       1 shared_informer.go:318] Caches are synced for node config
	I0109 00:02:41.818681       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0109 00:02:41.819040       1 shared_informer.go:318] Caches are synced for service config
	
	
	==> kube-scheduler [ca546a071dfbff1b1fc40b95cfb1af00bc76cb73e484276c72fa0b1f9c46aeec] <==
	W0109 00:02:19.575180       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0109 00:02:19.575194       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0109 00:02:19.575233       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0109 00:02:19.575247       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0109 00:02:19.575338       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0109 00:02:19.575355       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0109 00:02:19.575412       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0109 00:02:19.575430       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0109 00:02:19.575475       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0109 00:02:19.575489       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0109 00:02:19.575629       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0109 00:02:19.575647       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0109 00:02:19.575684       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0109 00:02:19.575700       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0109 00:02:19.575750       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0109 00:02:19.575765       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0109 00:02:19.575805       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0109 00:02:19.575819       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0109 00:02:19.575858       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0109 00:02:19.575872       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0109 00:02:19.576001       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0109 00:02:19.576019       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0109 00:02:19.576253       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0109 00:02:19.576279       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0109 00:02:21.065410       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jan 09 00:07:22 addons-983119 kubelet[1347]: E0109 00:07:22.374191    1347 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/8c9761d666e4f376c03f2631a431663f407624c082ae5d258762a971541451bf/diff" to get inode usage: stat /var/lib/containers/storage/overlay/8c9761d666e4f376c03f2631a431663f407624c082ae5d258762a971541451bf/diff: no such file or directory, extraDiskErr: <nil>
	Jan 09 00:07:22 addons-983119 kubelet[1347]: E0109 00:07:22.385783    1347 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/55b400ee7d2378a0e0e6ea1674972e9bb613b308482b1a69959c9d960531d100/diff" to get inode usage: stat /var/lib/containers/storage/overlay/55b400ee7d2378a0e0e6ea1674972e9bb613b308482b1a69959c9d960531d100/diff: no such file or directory, extraDiskErr: <nil>
	Jan 09 00:07:22 addons-983119 kubelet[1347]: E0109 00:07:22.387949    1347 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/97f770b90fb31a1caf7cbc704d276a9efda4d283fcefad18512f9dac43051338/diff" to get inode usage: stat /var/lib/containers/storage/overlay/97f770b90fb31a1caf7cbc704d276a9efda4d283fcefad18512f9dac43051338/diff: no such file or directory, extraDiskErr: <nil>
	Jan 09 00:07:33 addons-983119 kubelet[1347]: I0109 00:07:33.533111    1347 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tv5q6\" (UniqueName: \"kubernetes.io/projected/220059cf-253b-4ce8-8a22-9950d48be27e-kube-api-access-tv5q6\") pod \"220059cf-253b-4ce8-8a22-9950d48be27e\" (UID: \"220059cf-253b-4ce8-8a22-9950d48be27e\") "
	Jan 09 00:07:33 addons-983119 kubelet[1347]: I0109 00:07:33.535410    1347 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/220059cf-253b-4ce8-8a22-9950d48be27e-kube-api-access-tv5q6" (OuterVolumeSpecName: "kube-api-access-tv5q6") pod "220059cf-253b-4ce8-8a22-9950d48be27e" (UID: "220059cf-253b-4ce8-8a22-9950d48be27e"). InnerVolumeSpecName "kube-api-access-tv5q6". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jan 09 00:07:33 addons-983119 kubelet[1347]: I0109 00:07:33.633748    1347 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-tv5q6\" (UniqueName: \"kubernetes.io/projected/220059cf-253b-4ce8-8a22-9950d48be27e-kube-api-access-tv5q6\") on node \"addons-983119\" DevicePath \"\""
	Jan 09 00:07:34 addons-983119 kubelet[1347]: I0109 00:07:34.083067    1347 scope.go:117] "RemoveContainer" containerID="c92f2ab1999cdf0e7d82379430c4d8cfafb9a8b375d3310a8473ddd97610b7ff"
	Jan 09 00:07:36 addons-983119 kubelet[1347]: I0109 00:07:36.057971    1347 scope.go:117] "RemoveContainer" containerID="346220f49bc93784a5d8c976fee6125051ad50889e1e437fe3e733cfc23467bf"
	Jan 09 00:07:36 addons-983119 kubelet[1347]: I0109 00:07:36.060272    1347 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="220059cf-253b-4ce8-8a22-9950d48be27e" path="/var/lib/kubelet/pods/220059cf-253b-4ce8-8a22-9950d48be27e/volumes"
	Jan 09 00:07:36 addons-983119 kubelet[1347]: I0109 00:07:36.061316    1347 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="a2420262-f692-45eb-8c14-f9f5dfd7840a" path="/var/lib/kubelet/pods/a2420262-f692-45eb-8c14-f9f5dfd7840a/volumes"
	Jan 09 00:07:36 addons-983119 kubelet[1347]: I0109 00:07:36.061721    1347 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="e982d93f-efa5-4dce-8f86-a37be7304a3b" path="/var/lib/kubelet/pods/e982d93f-efa5-4dce-8f86-a37be7304a3b/volumes"
	Jan 09 00:07:37 addons-983119 kubelet[1347]: I0109 00:07:37.108408    1347 scope.go:117] "RemoveContainer" containerID="346220f49bc93784a5d8c976fee6125051ad50889e1e437fe3e733cfc23467bf"
	Jan 09 00:07:37 addons-983119 kubelet[1347]: I0109 00:07:37.108616    1347 scope.go:117] "RemoveContainer" containerID="07f523968424850b2e559b757a27588d47a1c8f0dd1b177f26f8c88c1bbbb30d"
	Jan 09 00:07:37 addons-983119 kubelet[1347]: E0109 00:07:37.108878    1347 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with CrashLoopBackOff: \"back-off 20s restarting failed container=hello-world-app pod=hello-world-app-5d77478584-8c5sk_default(aae1656c-a734-4480-a597-201184b635bc)\"" pod="default/hello-world-app-5d77478584-8c5sk" podUID="aae1656c-a734-4480-a597-201184b635bc"
	Jan 09 00:07:38 addons-983119 kubelet[1347]: I0109 00:07:38.112932    1347 scope.go:117] "RemoveContainer" containerID="15d99037b4888445a40f7bf7d6a98dd3d4fdd9a6f51a5ef0b6eecc47f1326fbb"
	Jan 09 00:07:38 addons-983119 kubelet[1347]: I0109 00:07:38.131159    1347 scope.go:117] "RemoveContainer" containerID="15d99037b4888445a40f7bf7d6a98dd3d4fdd9a6f51a5ef0b6eecc47f1326fbb"
	Jan 09 00:07:38 addons-983119 kubelet[1347]: E0109 00:07:38.131568    1347 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"15d99037b4888445a40f7bf7d6a98dd3d4fdd9a6f51a5ef0b6eecc47f1326fbb\": container with ID starting with 15d99037b4888445a40f7bf7d6a98dd3d4fdd9a6f51a5ef0b6eecc47f1326fbb not found: ID does not exist" containerID="15d99037b4888445a40f7bf7d6a98dd3d4fdd9a6f51a5ef0b6eecc47f1326fbb"
	Jan 09 00:07:38 addons-983119 kubelet[1347]: I0109 00:07:38.131616    1347 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"15d99037b4888445a40f7bf7d6a98dd3d4fdd9a6f51a5ef0b6eecc47f1326fbb"} err="failed to get container status \"15d99037b4888445a40f7bf7d6a98dd3d4fdd9a6f51a5ef0b6eecc47f1326fbb\": rpc error: code = NotFound desc = could not find container \"15d99037b4888445a40f7bf7d6a98dd3d4fdd9a6f51a5ef0b6eecc47f1326fbb\": container with ID starting with 15d99037b4888445a40f7bf7d6a98dd3d4fdd9a6f51a5ef0b6eecc47f1326fbb not found: ID does not exist"
	Jan 09 00:07:38 addons-983119 kubelet[1347]: I0109 00:07:38.161906    1347 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-khh4z\" (UniqueName: \"kubernetes.io/projected/8233e4d8-7555-4921-95ba-f93a031df6a2-kube-api-access-khh4z\") pod \"8233e4d8-7555-4921-95ba-f93a031df6a2\" (UID: \"8233e4d8-7555-4921-95ba-f93a031df6a2\") "
	Jan 09 00:07:38 addons-983119 kubelet[1347]: I0109 00:07:38.161979    1347 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8233e4d8-7555-4921-95ba-f93a031df6a2-webhook-cert\") pod \"8233e4d8-7555-4921-95ba-f93a031df6a2\" (UID: \"8233e4d8-7555-4921-95ba-f93a031df6a2\") "
	Jan 09 00:07:38 addons-983119 kubelet[1347]: I0109 00:07:38.165842    1347 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8233e4d8-7555-4921-95ba-f93a031df6a2-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "8233e4d8-7555-4921-95ba-f93a031df6a2" (UID: "8233e4d8-7555-4921-95ba-f93a031df6a2"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jan 09 00:07:38 addons-983119 kubelet[1347]: I0109 00:07:38.166387    1347 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8233e4d8-7555-4921-95ba-f93a031df6a2-kube-api-access-khh4z" (OuterVolumeSpecName: "kube-api-access-khh4z") pod "8233e4d8-7555-4921-95ba-f93a031df6a2" (UID: "8233e4d8-7555-4921-95ba-f93a031df6a2"). InnerVolumeSpecName "kube-api-access-khh4z". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jan 09 00:07:38 addons-983119 kubelet[1347]: I0109 00:07:38.263148    1347 reconciler_common.go:300] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8233e4d8-7555-4921-95ba-f93a031df6a2-webhook-cert\") on node \"addons-983119\" DevicePath \"\""
	Jan 09 00:07:38 addons-983119 kubelet[1347]: I0109 00:07:38.263190    1347 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-khh4z\" (UniqueName: \"kubernetes.io/projected/8233e4d8-7555-4921-95ba-f93a031df6a2-kube-api-access-khh4z\") on node \"addons-983119\" DevicePath \"\""
	Jan 09 00:07:40 addons-983119 kubelet[1347]: I0109 00:07:40.058742    1347 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="8233e4d8-7555-4921-95ba-f93a031df6a2" path="/var/lib/kubelet/pods/8233e4d8-7555-4921-95ba-f93a031df6a2/volumes"
	
	
	==> storage-provisioner [da0aaa3251b2d84fd8dd3659257bfb7c56cb08a1227a20d6ed5d7786ae96f61c] <==
	I0109 00:03:09.054537       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0109 00:03:09.077782       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0109 00:03:09.077964       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0109 00:03:09.113257       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0109 00:03:09.164157       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-983119_3e27554b-c553-4ca9-b1e0-0ea180f13108!
	I0109 00:03:09.180042       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"c891f798-694a-4292-a7ef-63f262c03243", APIVersion:"v1", ResourceVersion:"867", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-983119_3e27554b-c553-4ca9-b1e0-0ea180f13108 became leader
	I0109 00:03:09.265067       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-983119_3e27554b-c553-4ca9-b1e0-0ea180f13108!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-983119 -n addons-983119
helpers_test.go:261: (dbg) Run:  kubectl --context addons-983119 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (168.58s)
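One detail worth pulling out of the minikube logs above: etcd flagged several linearized reads of /registry/minions/addons-983119 at roughly 750-810ms against its 100ms expected-duration budget, so the control plane was already slow well before the ingress probe timed out. A minimal sketch for filtering such slow requests out of a saved log, assuming only the JSON trace format shown above (the 500ms threshold here is an arbitrary choice, not etcd's):

	// slowreads.go: scan etcd JSON log lines on stdin and print traces whose
	// "duration" field exceeds a threshold. Sketch only; it assumes the trace
	// format seen in the dump above and silently skips everything else.
	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"os"
		"time"
	)

	// entry models just the fields we need from an etcd trace line.
	type entry struct {
		Msg      string `json:"msg"`
		Duration string `json:"duration"` // e.g. "757.373176ms"
		Detail   string `json:"detail"`
	}

	func main() {
		const threshold = 500 * time.Millisecond // arbitrary cutoff, not etcd's 100ms budget
		sc := bufio.NewScanner(os.Stdin)
		sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // trace lines can be long
		for sc.Scan() {
			var e entry
			if json.Unmarshal(sc.Bytes(), &e) != nil {
				continue // not a JSON log line; ignore
			}
			d, err := time.ParseDuration(e.Duration)
			if err != nil || d < threshold {
				continue // no duration field, or fast enough
			}
			fmt.Printf("%v\t%s\t%s\n", d, e.Msg, e.Detail)
		}
	}

Fed the output of `minikube -p addons-983119 logs`, this would surface the same reads etcd itself warned about; non-JSON lines and traces without a duration are skipped.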

TestIngressAddonLegacy/serial/ValidateIngressAddons (178.37s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:207: (dbg) Run:  kubectl --context ingress-addon-legacy-037418 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:207: (dbg) Done: kubectl --context ingress-addon-legacy-037418 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (9.679793169s)
addons_test.go:232: (dbg) Run:  kubectl --context ingress-addon-legacy-037418 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context ingress-addon-legacy-037418 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [f71ead35-e6c4-4531-b660-21fb31cec2be] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [f71ead35-e6c4-4531-b660-21fb31cec2be] Running
addons_test.go:250: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 10.003731307s
addons_test.go:262: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-037418 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
E0109 00:16:55.256845 1683967 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/functional-451422/client.crt: no such file or directory
E0109 00:16:55.262372 1683967 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/functional-451422/client.crt: no such file or directory
E0109 00:16:55.272682 1683967 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/functional-451422/client.crt: no such file or directory
E0109 00:16:55.292959 1683967 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/functional-451422/client.crt: no such file or directory
E0109 00:16:55.333291 1683967 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/functional-451422/client.crt: no such file or directory
E0109 00:16:55.413630 1683967 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/functional-451422/client.crt: no such file or directory
E0109 00:16:55.574066 1683967 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/functional-451422/client.crt: no such file or directory
E0109 00:16:55.894580 1683967 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/functional-451422/client.crt: no such file or directory
E0109 00:16:56.535111 1683967 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/functional-451422/client.crt: no such file or directory
E0109 00:16:57.815319 1683967 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/functional-451422/client.crt: no such file or directory
E0109 00:17:00.375547 1683967 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/functional-451422/client.crt: no such file or directory
E0109 00:17:05.496682 1683967 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/functional-451422/client.crt: no such file or directory
E0109 00:17:15.737820 1683967 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/functional-451422/client.crt: no such file or directory
addons_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ingress-addon-legacy-037418 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.292444442s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:278: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
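The probe above shells into the node with minikube ssh and curls the ingress controller on 127.0.0.1 with the test's Host header; the `Process exited with status 28` in stderr matches curl's operation-timed-out exit code propagated back through ssh. A rough sketch of the same check from Go, as an approximation of what the suite drives (the 30s context timeout and curl's --max-time here are illustrative choices, not the suite's):

	// ingressprobe.go: approximate the test's ingress check by running curl
	// inside the minikube node via `minikube ssh`. Sketch only.
	package main

	import (
		"context"
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
		defer cancel()
		cmd := exec.CommandContext(ctx, "minikube", "-p", "ingress-addon-legacy-037418", "ssh",
			"curl -s --max-time 10 http://127.0.0.1/ -H 'Host: nginx.example.com'")
		out, err := cmd.CombinedOutput()
		if err != nil {
			// In the failing run above this is the path taken: curl times out
			// and ssh passes its exit status (28) through.
			fmt.Printf("probe failed: %v\n%s", err, out)
			return
		}
		fmt.Printf("%s", out)
	}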
addons_test.go:286: (dbg) Run:  kubectl --context ingress-addon-legacy-037418 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-037418 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.49.2
E0109 00:17:36.218055 1683967 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/functional-451422/client.crt: no such file or directory
addons_test.go:297: (dbg) Non-zero exit: nslookup hello-john.test 192.168.49.2: exit status 1 (15.011358246s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
addons_test.go:299: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.49.2" : exit status 1
addons_test.go:303: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached

stderr: 
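The nslookup step points at the node address 192.168.49.2 as the DNS server to check that the ingress-dns addon answers for hello-john.test; "connection timed out; no servers could be reached" means no response came back from that address at all. A minimal sketch of the same lookup in Go, using a resolver pinned to the node (the 5s timeouts are assumptions made here, not the test's):

	// dnsprobe.go: resolve a name against a specific DNS server, mirroring
	// `nslookup hello-john.test 192.168.49.2`. Sketch only.
	package main

	import (
		"context"
		"fmt"
		"net"
		"time"
	)

	func main() {
		r := &net.Resolver{
			PreferGo: true,
			Dial: func(ctx context.Context, network, _ string) (net.Conn, error) {
				// Ignore the system's default server and dial the minikube node.
				d := net.Dialer{Timeout: 5 * time.Second}
				return d.DialContext(ctx, network, "192.168.49.2:53")
			},
		}
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()
		addrs, err := r.LookupHost(ctx, "hello-john.test")
		if err != nil {
			fmt.Println("lookup failed:", err) // the run above timed out here
			return
		}
		fmt.Println("resolved to:", addrs)
	}

Overriding Resolver.Dial is the Go equivalent of nslookup's explicit-server form: every query goes to the pinned address instead of the servers in /etc/resolv.conf.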
addons_test.go:306: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-037418 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:306: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-037418 addons disable ingress-dns --alsologtostderr -v=1: (2.259648758s)
addons_test.go:311: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-037418 addons disable ingress --alsologtostderr -v=1
addons_test.go:311: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-037418 addons disable ingress --alsologtostderr -v=1: (7.550508507s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-037418
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-037418:

-- stdout --
	[
	    {
	        "Id": "3371289105c2f4389cbd24ab6918019cebaccca3e8098b4529d005dc860e50a7",
	        "Created": "2024-01-09T00:13:25.924512419Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1711760,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-01-09T00:13:26.249779077Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a5be0745bf7211988da1521fe4ee64cb5f5dee2ca8e3061f061c5272199c616c",
	        "ResolvConfPath": "/var/lib/docker/containers/3371289105c2f4389cbd24ab6918019cebaccca3e8098b4529d005dc860e50a7/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3371289105c2f4389cbd24ab6918019cebaccca3e8098b4529d005dc860e50a7/hostname",
	        "HostsPath": "/var/lib/docker/containers/3371289105c2f4389cbd24ab6918019cebaccca3e8098b4529d005dc860e50a7/hosts",
	        "LogPath": "/var/lib/docker/containers/3371289105c2f4389cbd24ab6918019cebaccca3e8098b4529d005dc860e50a7/3371289105c2f4389cbd24ab6918019cebaccca3e8098b4529d005dc860e50a7-json.log",
	        "Name": "/ingress-addon-legacy-037418",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-037418:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-037418",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/d6ac01d699cc72ea97c430ce202ba9b5544d1cb636175abda7a26fdc05558c2a-init/diff:/var/lib/docker/overlay2/a443ad727e446e5b332ea48292deac5ef22cb43b6aa42ee65e414679b2407c31/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d6ac01d699cc72ea97c430ce202ba9b5544d1cb636175abda7a26fdc05558c2a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d6ac01d699cc72ea97c430ce202ba9b5544d1cb636175abda7a26fdc05558c2a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d6ac01d699cc72ea97c430ce202ba9b5544d1cb636175abda7a26fdc05558c2a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-037418",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-037418/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-037418",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-037418",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-037418",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "9e1122342dfe23c31c386e1fd7c79907de5a568e7727959ffa13977dda6e419a",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34384"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34383"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34380"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34382"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34381"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/9e1122342dfe",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-037418": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "3371289105c2",
	                        "ingress-addon-legacy-037418"
	                    ],
	                    "NetworkID": "d8515096d584a985d5312ec8262ca956157605f14663cdd83e3a5ed8012233e9",
	                    "EndpointID": "ead6ebeb282b7c483d8b55b974903dda580ee3fb603f7324f99b6a4fdf66d1f9",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ingress-addon-legacy-037418 -n ingress-addon-legacy-037418
helpers_test.go:244: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-037418 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-037418 logs -n 25: (1.46378967s)
helpers_test.go:252: TestIngressAddonLegacy/serial/ValidateIngressAddons logs: 
-- stdout --
	
	==> Audit <==
	|----------------|------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	|    Command     |                                  Args                                  |           Profile           |  User   | Version |     Start Time      |      End Time       |
	|----------------|------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| update-context | functional-451422                                                      | functional-451422           | jenkins | v1.32.0 | 09 Jan 24 00:12 UTC | 09 Jan 24 00:12 UTC |
	|                | update-context                                                         |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                 |                             |         |         |                     |                     |
	| update-context | functional-451422                                                      | functional-451422           | jenkins | v1.32.0 | 09 Jan 24 00:12 UTC | 09 Jan 24 00:12 UTC |
	|                | update-context                                                         |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                 |                             |         |         |                     |                     |
	| image          | functional-451422 image load --daemon                                  | functional-451422           | jenkins | v1.32.0 | 09 Jan 24 00:12 UTC | 09 Jan 24 00:12 UTC |
	|                | gcr.io/google-containers/addon-resizer:functional-451422               |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image          | functional-451422 image ls                                             | functional-451422           | jenkins | v1.32.0 | 09 Jan 24 00:12 UTC | 09 Jan 24 00:12 UTC |
	| image          | functional-451422 image save                                           | functional-451422           | jenkins | v1.32.0 | 09 Jan 24 00:12 UTC | 09 Jan 24 00:12 UTC |
	|                | gcr.io/google-containers/addon-resizer:functional-451422               |                             |         |         |                     |                     |
	|                | /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image          | functional-451422 image rm                                             | functional-451422           | jenkins | v1.32.0 | 09 Jan 24 00:12 UTC | 09 Jan 24 00:12 UTC |
	|                | gcr.io/google-containers/addon-resizer:functional-451422               |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image          | functional-451422 image ls                                             | functional-451422           | jenkins | v1.32.0 | 09 Jan 24 00:12 UTC | 09 Jan 24 00:12 UTC |
	| image          | functional-451422 image load                                           | functional-451422           | jenkins | v1.32.0 | 09 Jan 24 00:12 UTC | 09 Jan 24 00:12 UTC |
	|                | /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image          | functional-451422 image ls                                             | functional-451422           | jenkins | v1.32.0 | 09 Jan 24 00:12 UTC | 09 Jan 24 00:12 UTC |
	| image          | functional-451422 image save --daemon                                  | functional-451422           | jenkins | v1.32.0 | 09 Jan 24 00:12 UTC | 09 Jan 24 00:12 UTC |
	|                | gcr.io/google-containers/addon-resizer:functional-451422               |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image          | functional-451422                                                      | functional-451422           | jenkins | v1.32.0 | 09 Jan 24 00:12 UTC | 09 Jan 24 00:12 UTC |
	|                | image ls --format short                                                |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image          | functional-451422                                                      | functional-451422           | jenkins | v1.32.0 | 09 Jan 24 00:12 UTC | 09 Jan 24 00:12 UTC |
	|                | image ls --format yaml                                                 |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| ssh            | functional-451422 ssh pgrep                                            | functional-451422           | jenkins | v1.32.0 | 09 Jan 24 00:12 UTC |                     |
	|                | buildkitd                                                              |                             |         |         |                     |                     |
	| image          | functional-451422                                                      | functional-451422           | jenkins | v1.32.0 | 09 Jan 24 00:12 UTC | 09 Jan 24 00:12 UTC |
	|                | image ls --format json                                                 |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image          | functional-451422 image build -t                                       | functional-451422           | jenkins | v1.32.0 | 09 Jan 24 00:12 UTC | 09 Jan 24 00:12 UTC |
	|                | localhost/my-image:functional-451422                                   |                             |         |         |                     |                     |
	|                | testdata/build --alsologtostderr                                       |                             |         |         |                     |                     |
	| image          | functional-451422                                                      | functional-451422           | jenkins | v1.32.0 | 09 Jan 24 00:12 UTC | 09 Jan 24 00:12 UTC |
	|                | image ls --format table                                                |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image          | functional-451422 image ls                                             | functional-451422           | jenkins | v1.32.0 | 09 Jan 24 00:12 UTC | 09 Jan 24 00:12 UTC |
	| delete         | -p functional-451422                                                   | functional-451422           | jenkins | v1.32.0 | 09 Jan 24 00:12 UTC | 09 Jan 24 00:13 UTC |
	| start          | -p ingress-addon-legacy-037418                                         | ingress-addon-legacy-037418 | jenkins | v1.32.0 | 09 Jan 24 00:13 UTC | 09 Jan 24 00:14 UTC |
	|                | --kubernetes-version=v1.18.20                                          |                             |         |         |                     |                     |
	|                | --memory=4096 --wait=true                                              |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	|                | -v=5 --driver=docker                                                   |                             |         |         |                     |                     |
	|                | --container-runtime=crio                                               |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-037418                                            | ingress-addon-legacy-037418 | jenkins | v1.32.0 | 09 Jan 24 00:14 UTC | 09 Jan 24 00:14 UTC |
	|                | addons enable ingress                                                  |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                                                 |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-037418                                            | ingress-addon-legacy-037418 | jenkins | v1.32.0 | 09 Jan 24 00:14 UTC | 09 Jan 24 00:14 UTC |
	|                | addons enable ingress-dns                                              |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                                                 |                             |         |         |                     |                     |
	| ssh            | ingress-addon-legacy-037418                                            | ingress-addon-legacy-037418 | jenkins | v1.32.0 | 09 Jan 24 00:15 UTC |                     |
	|                | ssh curl -s http://127.0.0.1/                                          |                             |         |         |                     |                     |
	|                | -H 'Host: nginx.example.com'                                           |                             |         |         |                     |                     |
	| ip             | ingress-addon-legacy-037418 ip                                         | ingress-addon-legacy-037418 | jenkins | v1.32.0 | 09 Jan 24 00:17 UTC | 09 Jan 24 00:17 UTC |
	| addons         | ingress-addon-legacy-037418                                            | ingress-addon-legacy-037418 | jenkins | v1.32.0 | 09 Jan 24 00:17 UTC | 09 Jan 24 00:17 UTC |
	|                | addons disable ingress-dns                                             |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                 |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-037418                                            | ingress-addon-legacy-037418 | jenkins | v1.32.0 | 09 Jan 24 00:17 UTC | 09 Jan 24 00:17 UTC |
	|                | addons disable ingress                                                 |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                 |                             |         |         |                     |                     |
	|----------------|------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/09 00:13:01
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.21.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
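
Editor's note: the four header lines above define the glog/klog-style format used by every entry that follows. As a reading aid only, here is a minimal, self-contained Go sketch (not minikube code; the regexp and field names are my own) that splits one such line into its parts:

// Parse one [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg log line.
package main

import (
	"fmt"
	"regexp"
)

var glogLine = regexp.MustCompile(
	`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([^:]+:\d+)\] (.*)$`)

func main() {
	line := "I0109 00:13:01.542019 1711298 out.go:296] Setting OutFile to fd 1 ..."
	m := glogLine.FindStringSubmatch(line)
	if m == nil {
		fmt.Println("not a glog line")
		return
	}
	fmt.Printf("severity=%s date(mmdd)=%s time=%s threadid=%s file:line=%s msg=%q\n",
		m[1], m[2], m[3], m[4], m[5], m[6])
}
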
	I0109 00:13:01.542019 1711298 out.go:296] Setting OutFile to fd 1 ...
	I0109 00:13:01.542200 1711298 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0109 00:13:01.542221 1711298 out.go:309] Setting ErrFile to fd 2...
	I0109 00:13:01.542240 1711298 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0109 00:13:01.542556 1711298 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17830-1678586/.minikube/bin
	I0109 00:13:01.543021 1711298 out.go:303] Setting JSON to false
	I0109 00:13:01.543885 1711298 start.go:128] hostinfo: {"hostname":"ip-172-31-30-239","uptime":24924,"bootTime":1704734258,"procs":160,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
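
Editor's note: the hostinfo blob above is plain JSON (minikube gets it from gopsutil's host.InfoStat). A small illustrative Go sketch decoding just a subset of the fields shown; the struct here is my own reduction, not the real type:

package main

import (
	"encoding/json"
	"fmt"
)

// HostInfo mirrors a handful of the fields visible in the log line above.
type HostInfo struct {
	Hostname      string `json:"hostname"`
	Uptime        uint64 `json:"uptime"`   // seconds since boot
	BootTime      uint64 `json:"bootTime"` // unix seconds
	Procs         uint64 `json:"procs"`
	OS            string `json:"os"`
	Platform      string `json:"platform"`
	KernelVersion string `json:"kernelVersion"`
	KernelArch    string `json:"kernelArch"`
}

func main() {
	raw := `{"hostname":"ip-172-31-30-239","uptime":24924,"bootTime":1704734258,"procs":160,"os":"linux","platform":"ubuntu","kernelVersion":"5.15.0-1051-aws","kernelArch":"aarch64"}`
	var hi HostInfo
	if err := json.Unmarshal([]byte(raw), &hi); err != nil {
		panic(err)
	}
	fmt.Printf("%s runs %s/%s (kernel %s), up %ds\n",
		hi.Hostname, hi.OS, hi.KernelArch, hi.KernelVersion, hi.Uptime)
}
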
	I0109 00:13:01.544016 1711298 start.go:138] virtualization:  
	I0109 00:13:01.549077 1711298 out.go:177] * [ingress-addon-legacy-037418] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0109 00:13:01.551455 1711298 out.go:177]   - MINIKUBE_LOCATION=17830
	I0109 00:13:01.553600 1711298 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0109 00:13:01.551540 1711298 notify.go:220] Checking for updates...
	I0109 00:13:01.555845 1711298 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17830-1678586/kubeconfig
	I0109 00:13:01.558098 1711298 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17830-1678586/.minikube
	I0109 00:13:01.559943 1711298 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0109 00:13:01.561863 1711298 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0109 00:13:01.564266 1711298 driver.go:392] Setting default libvirt URI to qemu:///system
	I0109 00:13:01.589910 1711298 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I0109 00:13:01.590048 1711298 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0109 00:13:01.678240 1711298 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:35 SystemTime:2024-01-09 00:13:01.668485829 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0109 00:13:01.678344 1711298 docker.go:295] overlay module found
	I0109 00:13:01.681007 1711298 out.go:177] * Using the docker driver based on user configuration
	I0109 00:13:01.684291 1711298 start.go:298] selected driver: docker
	I0109 00:13:01.684310 1711298 start.go:902] validating driver "docker" against <nil>
	I0109 00:13:01.684331 1711298 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0109 00:13:01.684949 1711298 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0109 00:13:01.746909 1711298 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:35 SystemTime:2024-01-09 00:13:01.737596394 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0109 00:13:01.747082 1711298 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0109 00:13:01.747336 1711298 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0109 00:13:01.750049 1711298 out.go:177] * Using Docker driver with root privileges
	I0109 00:13:01.752406 1711298 cni.go:84] Creating CNI manager for ""
	I0109 00:13:01.752426 1711298 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0109 00:13:01.752438 1711298 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I0109 00:13:01.752455 1711298 start_flags.go:323] config:
	{Name:ingress-addon-legacy-037418 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-037418 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0109 00:13:01.755147 1711298 out.go:177] * Starting control plane node ingress-addon-legacy-037418 in cluster ingress-addon-legacy-037418
	I0109 00:13:01.757426 1711298 cache.go:121] Beginning downloading kic base image for docker with crio
	I0109 00:13:01.759703 1711298 out.go:177] * Pulling base image v0.0.42-1704751654-17830 ...
	I0109 00:13:01.762144 1711298 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0109 00:13:01.762235 1711298 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 in local docker daemon
	I0109 00:13:01.779569 1711298 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 in local docker daemon, skipping pull
	I0109 00:13:01.779597 1711298 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 exists in daemon, skipping load
	I0109 00:13:01.821660 1711298 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4
	I0109 00:13:01.821685 1711298 cache.go:56] Caching tarball of preloaded images
	I0109 00:13:01.821851 1711298 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0109 00:13:01.824278 1711298 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I0109 00:13:01.826088 1711298 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 ...
	I0109 00:13:01.942574 1711298 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4?checksum=md5:8ddd7f37d9a9977fe856222993d36c3d -> /home/jenkins/minikube-integration/17830-1678586/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4
	I0109 00:13:18.178529 1711298 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 ...
	I0109 00:13:18.178642 1711298 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17830-1678586/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 ...
	I0109 00:13:19.363820 1711298 cache.go:59] Finished verifying existence of preloaded tar for  v1.18.20 on crio
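
Editor's note: the download URL above carries a "?checksum=md5:..." parameter, and the getting/saving/verifying-checksum steps bracket the transfer. A stdlib-only Go sketch of that verification step, using the hash and file name from the log (the helper itself is illustrative, not minikube's implementation):

package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"os"
)

// md5sum streams a file through MD5 and returns the hex digest.
func md5sum(path string) (string, error) {
	f, err := os.Open(path)
	if err != nil {
		return "", err
	}
	defer f.Close()
	h := md5.New()
	if _, err := io.Copy(h, f); err != nil {
		return "", err
	}
	return hex.EncodeToString(h.Sum(nil)), nil
}

func main() {
	want := "8ddd7f37d9a9977fe856222993d36c3d" // from the ?checksum=md5: URL above
	// File name from the log; the real file lives under .minikube/cache/preloaded-tarball/.
	got, err := md5sum("preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4")
	if err != nil {
		panic(err)
	}
	if got != want {
		panic(fmt.Sprintf("checksum mismatch: got %s want %s", got, want))
	}
	fmt.Println("preload checksum OK")
}
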
	I0109 00:13:19.364202 1711298 profile.go:148] Saving config to /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/ingress-addon-legacy-037418/config.json ...
	I0109 00:13:19.364238 1711298 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/ingress-addon-legacy-037418/config.json: {Name:mkeee1bbd77a915fb0d30ddae3d37d618c8d71df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0109 00:13:19.364422 1711298 cache.go:194] Successfully downloaded all kic artifacts
	I0109 00:13:19.364485 1711298 start.go:365] acquiring machines lock for ingress-addon-legacy-037418: {Name:mk2f405c752213dc90d8d8695c1a8713c82df115 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0109 00:13:19.364543 1711298 start.go:369] acquired machines lock for "ingress-addon-legacy-037418" in 43.824µs
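
Editor's note: the machines lock above is specified as {Delay:500ms Timeout:10m0s}, i.e. retry acquisition every Delay until Timeout. minikube's actual lock comes from a third-party package, so the following stdlib-only Go sketch only illustrates that retry-until-timeout pattern under my own assumptions:

package main

import (
	"errors"
	"fmt"
	"os"
	"time"
)

// acquire retries an exclusive lock-file creation every delay until timeout.
func acquire(path string, delay, timeout time.Duration) (release func(), err error) {
	deadline := time.Now().Add(timeout)
	for {
		f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
		if err == nil {
			f.Close()
			return func() { os.Remove(path) }, nil
		}
		if !errors.Is(err, os.ErrExist) {
			return nil, err
		}
		if time.Now().After(deadline) {
			return nil, fmt.Errorf("timed out acquiring %s", path)
		}
		time.Sleep(delay)
	}
}

func main() {
	release, err := acquire("/tmp/demo.lock", 500*time.Millisecond, 10*time.Minute)
	if err != nil {
		panic(err)
	}
	defer release()
	fmt.Println("lock held; safe to provision the machine")
}
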
	I0109 00:13:19.364568 1711298 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-037418 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-037418 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0109 00:13:19.364638 1711298 start.go:125] createHost starting for "" (driver="docker")
	I0109 00:13:19.367244 1711298 out.go:204] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I0109 00:13:19.367492 1711298 start.go:159] libmachine.API.Create for "ingress-addon-legacy-037418" (driver="docker")
	I0109 00:13:19.367543 1711298 client.go:168] LocalClient.Create starting
	I0109 00:13:19.367621 1711298 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17830-1678586/.minikube/certs/ca.pem
	I0109 00:13:19.367661 1711298 main.go:141] libmachine: Decoding PEM data...
	I0109 00:13:19.367680 1711298 main.go:141] libmachine: Parsing certificate...
	I0109 00:13:19.367739 1711298 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17830-1678586/.minikube/certs/cert.pem
	I0109 00:13:19.367763 1711298 main.go:141] libmachine: Decoding PEM data...
	I0109 00:13:19.367778 1711298 main.go:141] libmachine: Parsing certificate...
	I0109 00:13:19.368147 1711298 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-037418 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0109 00:13:19.384886 1711298 cli_runner.go:211] docker network inspect ingress-addon-legacy-037418 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0109 00:13:19.384983 1711298 network_create.go:281] running [docker network inspect ingress-addon-legacy-037418] to gather additional debugging logs...
	I0109 00:13:19.385005 1711298 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-037418
	W0109 00:13:19.401451 1711298 cli_runner.go:211] docker network inspect ingress-addon-legacy-037418 returned with exit code 1
	I0109 00:13:19.401485 1711298 network_create.go:284] error running [docker network inspect ingress-addon-legacy-037418]: docker network inspect ingress-addon-legacy-037418: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ingress-addon-legacy-037418 not found
	I0109 00:13:19.401499 1711298 network_create.go:286] output of [docker network inspect ingress-addon-legacy-037418]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ingress-addon-legacy-037418 not found
	
	** /stderr **
	I0109 00:13:19.401603 1711298 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0109 00:13:19.418392 1711298 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400000e300}
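
Editor's note: the subnet struct above is fully determined by the /24 CIDR: gateway .1, first client .2, last client .254, broadcast .255. A short Go sketch reproducing those derived fields from 192.168.49.0/24 (illustrative only, not minikube's actual network picker):

package main

import (
	"fmt"
	"net"
)

func main() {
	_, ipnet, err := net.ParseCIDR("192.168.49.0/24")
	if err != nil {
		panic(err)
	}
	base := ipnet.IP.To4()
	gateway := net.IPv4(base[0], base[1], base[2], base[3]+1)   // .1
	clientMin := net.IPv4(base[0], base[1], base[2], base[3]+2) // .2
	clientMax := net.IPv4(base[0], base[1], base[2], 254)       // .254
	broadcast := net.IPv4(base[0], base[1], base[2], 255)       // .255
	ones, _ := ipnet.Mask.Size()
	fmt.Printf("CIDR=%s prefix=/%d gateway=%s clients=%s-%s broadcast=%s\n",
		ipnet, ones, gateway, clientMin, clientMax, broadcast)
}
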
	I0109 00:13:19.418429 1711298 network_create.go:124] attempt to create docker network ingress-addon-legacy-037418 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0109 00:13:19.418502 1711298 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ingress-addon-legacy-037418 ingress-addon-legacy-037418
	I0109 00:13:19.491888 1711298 network_create.go:108] docker network ingress-addon-legacy-037418 192.168.49.0/24 created
	I0109 00:13:19.491924 1711298 kic.go:121] calculated static IP "192.168.49.2" for the "ingress-addon-legacy-037418" container
	I0109 00:13:19.491998 1711298 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0109 00:13:19.509899 1711298 cli_runner.go:164] Run: docker volume create ingress-addon-legacy-037418 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-037418 --label created_by.minikube.sigs.k8s.io=true
	I0109 00:13:19.527926 1711298 oci.go:103] Successfully created a docker volume ingress-addon-legacy-037418
	I0109 00:13:19.528012 1711298 cli_runner.go:164] Run: docker run --rm --name ingress-addon-legacy-037418-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-037418 --entrypoint /usr/bin/test -v ingress-addon-legacy-037418:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 -d /var/lib
	I0109 00:13:21.024923 1711298 cli_runner.go:217] Completed: docker run --rm --name ingress-addon-legacy-037418-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-037418 --entrypoint /usr/bin/test -v ingress-addon-legacy-037418:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 -d /var/lib: (1.496868211s)
	I0109 00:13:21.024958 1711298 oci.go:107] Successfully prepared a docker volume ingress-addon-legacy-037418
	I0109 00:13:21.024976 1711298 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0109 00:13:21.024997 1711298 kic.go:194] Starting extracting preloaded images to volume ...
	I0109 00:13:21.025086 1711298 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17830-1678586/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-037418:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 -I lz4 -xf /preloaded.tar -C /extractDir
	I0109 00:13:25.841223 1711298 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17830-1678586/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-037418:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 -I lz4 -xf /preloaded.tar -C /extractDir: (4.816093104s)
	I0109 00:13:25.841256 1711298 kic.go:203] duration metric: took 4.816256 seconds to extract preloaded images to volume
	W0109 00:13:25.841402 1711298 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0109 00:13:25.841512 1711298 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0109 00:13:25.909014 1711298 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ingress-addon-legacy-037418 --name ingress-addon-legacy-037418 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-037418 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ingress-addon-legacy-037418 --network ingress-addon-legacy-037418 --ip 192.168.49.2 --volume ingress-addon-legacy-037418:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617
	I0109 00:13:26.258293 1711298 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-037418 --format={{.State.Running}}
	I0109 00:13:26.289964 1711298 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-037418 --format={{.State.Status}}
	I0109 00:13:26.314999 1711298 cli_runner.go:164] Run: docker exec ingress-addon-legacy-037418 stat /var/lib/dpkg/alternatives/iptables
	I0109 00:13:26.382067 1711298 oci.go:144] the created container "ingress-addon-legacy-037418" has a running status.
	I0109 00:13:26.382095 1711298 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17830-1678586/.minikube/machines/ingress-addon-legacy-037418/id_rsa...
	I0109 00:13:26.550791 1711298 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-1678586/.minikube/machines/ingress-addon-legacy-037418/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0109 00:13:26.550841 1711298 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17830-1678586/.minikube/machines/ingress-addon-legacy-037418/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0109 00:13:26.580289 1711298 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-037418 --format={{.State.Status}}
	I0109 00:13:26.617146 1711298 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0109 00:13:26.617174 1711298 kic_runner.go:114] Args: [docker exec --privileged ingress-addon-legacy-037418 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0109 00:13:26.682563 1711298 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-037418 --format={{.State.Status}}
	I0109 00:13:26.705665 1711298 machine.go:88] provisioning docker machine ...
	I0109 00:13:26.705705 1711298 ubuntu.go:169] provisioning hostname "ingress-addon-legacy-037418"
	I0109 00:13:26.705769 1711298 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-037418
	I0109 00:13:26.736404 1711298 main.go:141] libmachine: Using SSH client type: native
	I0109 00:13:26.738655 1711298 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bfa60] 0x3c21d0 <nil>  [] 0s} 127.0.0.1 34384 <nil> <nil>}
	I0109 00:13:26.738685 1711298 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-037418 && echo "ingress-addon-legacy-037418" | sudo tee /etc/hostname
	I0109 00:13:26.739193 1711298 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:39780->127.0.0.1:34384: read: connection reset by peer
	I0109 00:13:29.904711 1711298 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-037418
	
	I0109 00:13:29.904800 1711298 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-037418
	I0109 00:13:29.923484 1711298 main.go:141] libmachine: Using SSH client type: native
	I0109 00:13:29.923883 1711298 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bfa60] 0x3c21d0 <nil>  [] 0s} 127.0.0.1 34384 <nil> <nil>}
	I0109 00:13:29.923910 1711298 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-037418' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-037418/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-037418' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0109 00:13:30.072312 1711298 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0109 00:13:30.073610 1711298 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17830-1678586/.minikube CaCertPath:/home/jenkins/minikube-integration/17830-1678586/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17830-1678586/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17830-1678586/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17830-1678586/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17830-1678586/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17830-1678586/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17830-1678586/.minikube}
	I0109 00:13:30.073673 1711298 ubuntu.go:177] setting up certificates
	I0109 00:13:30.073689 1711298 provision.go:83] configureAuth start
	I0109 00:13:30.073770 1711298 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-037418
	I0109 00:13:30.093613 1711298 provision.go:138] copyHostCerts
	I0109 00:13:30.093662 1711298 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-1678586/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17830-1678586/.minikube/ca.pem
	I0109 00:13:30.093699 1711298 exec_runner.go:144] found /home/jenkins/minikube-integration/17830-1678586/.minikube/ca.pem, removing ...
	I0109 00:13:30.093714 1711298 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17830-1678586/.minikube/ca.pem
	I0109 00:13:30.093799 1711298 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17830-1678586/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17830-1678586/.minikube/ca.pem (1082 bytes)
	I0109 00:13:30.093900 1711298 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-1678586/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17830-1678586/.minikube/cert.pem
	I0109 00:13:30.093924 1711298 exec_runner.go:144] found /home/jenkins/minikube-integration/17830-1678586/.minikube/cert.pem, removing ...
	I0109 00:13:30.093929 1711298 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17830-1678586/.minikube/cert.pem
	I0109 00:13:30.093958 1711298 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17830-1678586/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17830-1678586/.minikube/cert.pem (1123 bytes)
	I0109 00:13:30.094059 1711298 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-1678586/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17830-1678586/.minikube/key.pem
	I0109 00:13:30.094082 1711298 exec_runner.go:144] found /home/jenkins/minikube-integration/17830-1678586/.minikube/key.pem, removing ...
	I0109 00:13:30.094090 1711298 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17830-1678586/.minikube/key.pem
	I0109 00:13:30.094118 1711298 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17830-1678586/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17830-1678586/.minikube/key.pem (1679 bytes)
	I0109 00:13:30.094174 1711298 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17830-1678586/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17830-1678586/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17830-1678586/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-037418 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube ingress-addon-legacy-037418]
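
Editor's note: the server certificate above is signed by the minikube CA with the SAN list [192.168.49.2 127.0.0.1 localhost minikube ingress-addon-legacy-037418] and the 26280h expiry from the cluster config. For illustration, a self-contained Go sketch issuing a certificate with that SAN set; it self-signs for brevity, unlike the CA-signed cert in the log:

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ingress-addon-legacy-037418"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config above
		IPAddresses:  []net.IP{net.ParseIP("192.168.49.2"), net.ParseIP("127.0.0.1")},
		DNSNames:     []string{"localhost", "minikube", "ingress-addon-legacy-037418"},
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	// Self-signed (template doubles as parent); the log signs with the minikube CA instead.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})))
}
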
	I0109 00:13:30.293933 1711298 provision.go:172] copyRemoteCerts
	I0109 00:13:30.294026 1711298 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0109 00:13:30.294073 1711298 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-037418
	I0109 00:13:30.312013 1711298 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34384 SSHKeyPath:/home/jenkins/minikube-integration/17830-1678586/.minikube/machines/ingress-addon-legacy-037418/id_rsa Username:docker}
	I0109 00:13:30.416967 1711298 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-1678586/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0109 00:13:30.417035 1711298 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-1678586/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0109 00:13:30.445474 1711298 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-1678586/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0109 00:13:30.445537 1711298 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-1678586/.minikube/machines/server.pem --> /etc/docker/server.pem (1253 bytes)
	I0109 00:13:30.474220 1711298 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-1678586/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0109 00:13:30.474284 1711298 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-1678586/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0109 00:13:30.503027 1711298 provision.go:86] duration metric: configureAuth took 429.323387ms
	I0109 00:13:30.503058 1711298 ubuntu.go:193] setting minikube options for container-runtime
	I0109 00:13:30.503252 1711298 config.go:182] Loaded profile config "ingress-addon-legacy-037418": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I0109 00:13:30.503362 1711298 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-037418
	I0109 00:13:30.520780 1711298 main.go:141] libmachine: Using SSH client type: native
	I0109 00:13:30.521199 1711298 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bfa60] 0x3c21d0 <nil>  [] 0s} 127.0.0.1 34384 <nil> <nil>}
	I0109 00:13:30.521220 1711298 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0109 00:13:30.805658 1711298 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0109 00:13:30.805684 1711298 machine.go:91] provisioned docker machine in 4.099994703s
	I0109 00:13:30.805695 1711298 client.go:171] LocalClient.Create took 11.438142554s
	I0109 00:13:30.805706 1711298 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-037418" took 11.438215991s
	I0109 00:13:30.805761 1711298 start.go:300] post-start starting for "ingress-addon-legacy-037418" (driver="docker")
	I0109 00:13:30.805774 1711298 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0109 00:13:30.805861 1711298 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0109 00:13:30.805932 1711298 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-037418
	I0109 00:13:30.830675 1711298 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34384 SSHKeyPath:/home/jenkins/minikube-integration/17830-1678586/.minikube/machines/ingress-addon-legacy-037418/id_rsa Username:docker}
	I0109 00:13:30.937433 1711298 ssh_runner.go:195] Run: cat /etc/os-release
	I0109 00:13:30.941537 1711298 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0109 00:13:30.941577 1711298 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0109 00:13:30.941588 1711298 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0109 00:13:30.941596 1711298 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0109 00:13:30.941607 1711298 filesync.go:126] Scanning /home/jenkins/minikube-integration/17830-1678586/.minikube/addons for local assets ...
	I0109 00:13:30.941675 1711298 filesync.go:126] Scanning /home/jenkins/minikube-integration/17830-1678586/.minikube/files for local assets ...
	I0109 00:13:30.941777 1711298 filesync.go:149] local asset: /home/jenkins/minikube-integration/17830-1678586/.minikube/files/etc/ssl/certs/16839672.pem -> 16839672.pem in /etc/ssl/certs
	I0109 00:13:30.941789 1711298 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-1678586/.minikube/files/etc/ssl/certs/16839672.pem -> /etc/ssl/certs/16839672.pem
	I0109 00:13:30.941906 1711298 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0109 00:13:30.952261 1711298 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-1678586/.minikube/files/etc/ssl/certs/16839672.pem --> /etc/ssl/certs/16839672.pem (1708 bytes)
	I0109 00:13:30.981153 1711298 start.go:303] post-start completed in 175.375083ms
	I0109 00:13:30.981548 1711298 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-037418
	I0109 00:13:30.999877 1711298 profile.go:148] Saving config to /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/ingress-addon-legacy-037418/config.json ...
	I0109 00:13:31.000169 1711298 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0109 00:13:31.000223 1711298 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-037418
	I0109 00:13:31.018066 1711298 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34384 SSHKeyPath:/home/jenkins/minikube-integration/17830-1678586/.minikube/machines/ingress-addon-legacy-037418/id_rsa Username:docker}
	I0109 00:13:31.116647 1711298 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0109 00:13:31.122368 1711298 start.go:128] duration metric: createHost completed in 11.757715122s
	I0109 00:13:31.122392 1711298 start.go:83] releasing machines lock for "ingress-addon-legacy-037418", held for 11.757835541s
	I0109 00:13:31.122476 1711298 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-037418
	I0109 00:13:31.140721 1711298 ssh_runner.go:195] Run: cat /version.json
	I0109 00:13:31.140741 1711298 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0109 00:13:31.140774 1711298 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-037418
	I0109 00:13:31.140798 1711298 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-037418
	I0109 00:13:31.162336 1711298 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34384 SSHKeyPath:/home/jenkins/minikube-integration/17830-1678586/.minikube/machines/ingress-addon-legacy-037418/id_rsa Username:docker}
	I0109 00:13:31.162918 1711298 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34384 SSHKeyPath:/home/jenkins/minikube-integration/17830-1678586/.minikube/machines/ingress-addon-legacy-037418/id_rsa Username:docker}
	I0109 00:13:31.409018 1711298 ssh_runner.go:195] Run: systemctl --version
	I0109 00:13:31.414773 1711298 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0109 00:13:31.562221 1711298 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0109 00:13:31.567749 1711298 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0109 00:13:31.591473 1711298 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0109 00:13:31.591593 1711298 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0109 00:13:31.631620 1711298 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
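
Editor's note: the lines above show the pattern used to sideline conflicting CNI configs: match bridge/podman files in /etc/cni/net.d and rename them with a .mk_disabled suffix. A stdlib Go sketch of that rename-to-disable pattern (illustrative; the logged implementation shells out to find/mv):

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// disableCNIConfigs renames files matching the given glob patterns in dir
// by appending ".mk_disabled", mirroring the find/mv pipeline in the log.
func disableCNIConfigs(dir string, patterns ...string) ([]string, error) {
	var disabled []string
	for _, pat := range patterns {
		matches, err := filepath.Glob(filepath.Join(dir, pat))
		if err != nil {
			return nil, err
		}
		for _, m := range matches {
			if strings.HasSuffix(m, ".mk_disabled") {
				continue // already sidelined
			}
			if err := os.Rename(m, m+".mk_disabled"); err != nil {
				return nil, err
			}
			disabled = append(disabled, m)
		}
	}
	return disabled, nil
}

func main() {
	disabled, err := disableCNIConfigs("/etc/cni/net.d", "*bridge*", "*podman*")
	if err != nil {
		panic(err)
	}
	fmt.Println("disabled:", disabled)
}
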
	I0109 00:13:31.631644 1711298 start.go:475] detecting cgroup driver to use...
	I0109 00:13:31.631676 1711298 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0109 00:13:31.631733 1711298 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0109 00:13:31.650567 1711298 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0109 00:13:31.664053 1711298 docker.go:203] disabling cri-docker service (if available) ...
	I0109 00:13:31.664115 1711298 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0109 00:13:31.680186 1711298 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0109 00:13:31.696781 1711298 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0109 00:13:31.804701 1711298 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0109 00:13:31.911079 1711298 docker.go:219] disabling docker service ...
	I0109 00:13:31.911149 1711298 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0109 00:13:31.932669 1711298 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0109 00:13:31.947612 1711298 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0109 00:13:32.055282 1711298 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0109 00:13:32.154334 1711298 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0109 00:13:32.168215 1711298 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0109 00:13:32.188303 1711298 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0109 00:13:32.188419 1711298 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0109 00:13:32.200857 1711298 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0109 00:13:32.200974 1711298 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0109 00:13:32.212505 1711298 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0109 00:13:32.224288 1711298 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0109 00:13:32.235809 1711298 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0109 00:13:32.246536 1711298 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0109 00:13:32.256507 1711298 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0109 00:13:32.266938 1711298 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0109 00:13:32.360944 1711298 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0109 00:13:32.491296 1711298 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0109 00:13:32.491435 1711298 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0109 00:13:32.496244 1711298 start.go:543] Will wait 60s for crictl version
	I0109 00:13:32.496335 1711298 ssh_runner.go:195] Run: which crictl
	I0109 00:13:32.501154 1711298 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0109 00:13:32.547751 1711298 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0109 00:13:32.547892 1711298 ssh_runner.go:195] Run: crio --version
	I0109 00:13:32.592409 1711298 ssh_runner.go:195] Run: crio --version
	I0109 00:13:32.636817 1711298 out.go:177] * Preparing Kubernetes v1.18.20 on CRI-O 1.24.6 ...
	I0109 00:13:32.638971 1711298 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-037418 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0109 00:13:32.656093 1711298 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0109 00:13:32.660648 1711298 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0109 00:13:32.673843 1711298 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0109 00:13:32.673909 1711298 ssh_runner.go:195] Run: sudo crictl images --output json
	I0109 00:13:32.727349 1711298 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I0109 00:13:32.727420 1711298 ssh_runner.go:195] Run: which lz4
	I0109 00:13:32.731836 1711298 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-1678586/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 -> /preloaded.tar.lz4
	I0109 00:13:32.731937 1711298 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0109 00:13:32.736175 1711298 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0109 00:13:32.736207 1711298 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-1678586/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 --> /preloaded.tar.lz4 (489766197 bytes)
	I0109 00:13:34.893848 1711298 crio.go:444] Took 2.161948 seconds to copy over tarball
	I0109 00:13:34.893994 1711298 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0109 00:13:37.555210 1711298 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.661170035s)
	I0109 00:13:37.555242 1711298 crio.go:451] Took 2.661328 seconds to extract the tarball
	I0109 00:13:37.555253 1711298 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0109 00:13:37.641153 1711298 ssh_runner.go:195] Run: sudo crictl images --output json
	I0109 00:13:37.679958 1711298 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I0109 00:13:37.679981 1711298 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0109 00:13:37.680041 1711298 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
	I0109 00:13:37.680076 1711298 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0109 00:13:37.680247 1711298 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
	I0109 00:13:37.680254 1711298 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
	I0109 00:13:37.680337 1711298 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0109 00:13:37.680342 1711298 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0109 00:13:37.680407 1711298 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I0109 00:13:37.680490 1711298 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I0109 00:13:37.681906 1711298 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I0109 00:13:37.682259 1711298 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
	I0109 00:13:37.682460 1711298 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
	I0109 00:13:37.682619 1711298 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I0109 00:13:37.682663 1711298 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0109 00:13:37.682738 1711298 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0109 00:13:37.682885 1711298 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
	I0109 00:13:37.682932 1711298 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	W0109 00:13:38.014969 1711298 image.go:265] image registry.k8s.io/kube-proxy:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0109 00:13:38.015228 1711298 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.18.20
	I0109 00:13:38.066867 1711298 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "b11cdc97ac6ac4ef2b3b0662edbe16597084b17cbc8e3d61fcaf4ef827a7ed18" in container runtime
	I0109 00:13:38.066932 1711298 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.18.20
	I0109 00:13:38.066988 1711298 ssh_runner.go:195] Run: which crictl
	W0109 00:13:38.068979 1711298 image.go:265] image registry.k8s.io/etcd:3.4.3-0 arch mismatch: want arm64 got amd64. fixing
	I0109 00:13:38.069207 1711298 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.3-0
	I0109 00:13:38.072744 1711298 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.18.20
	W0109 00:13:38.073261 1711298 image.go:265] image registry.k8s.io/kube-scheduler:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0109 00:13:38.073414 1711298 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.18.20
	W0109 00:13:38.080367 1711298 image.go:265] image registry.k8s.io/coredns:1.6.7 arch mismatch: want arm64 got amd64. fixing
	I0109 00:13:38.080626 1711298 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.7
	W0109 00:13:38.080904 1711298 image.go:265] image registry.k8s.io/kube-apiserver:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0109 00:13:38.081304 1711298 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.18.20
	W0109 00:13:38.091445 1711298 image.go:265] image registry.k8s.io/kube-controller-manager:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0109 00:13:38.091695 1711298 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.18.20
	I0109 00:13:38.091963 1711298 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	W0109 00:13:38.196545 1711298 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0109 00:13:38.196707 1711298 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0109 00:13:38.219864 1711298 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "29dd247b2572efbe28fcaea3fef1c5d72593da59f7350e3f6d2e6618983f9c03" in container runtime
	I0109 00:13:38.219962 1711298 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.3-0
	I0109 00:13:38.220042 1711298 ssh_runner.go:195] Run: which crictl
	I0109 00:13:38.234528 1711298 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17830-1678586/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.18.20
	I0109 00:13:38.234660 1711298 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "177548d745cb87f773d02f41d453af2f2a1479dbe3c32e749cf6d8145c005e79" in container runtime
	I0109 00:13:38.234717 1711298 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
	I0109 00:13:38.234789 1711298 ssh_runner.go:195] Run: which crictl
	I0109 00:13:38.303238 1711298 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "d353007847ec85700463981309a5846c8d9c93fbcd1323104266212926d68257" in container runtime
	I0109 00:13:38.303330 1711298 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
	I0109 00:13:38.303418 1711298 ssh_runner.go:195] Run: which crictl
	I0109 00:13:38.303525 1711298 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "ff3af22d8878afc6985d3fec3e066d00ef431aa166c3a01ac58f1990adc92a2c" in container runtime
	I0109 00:13:38.303578 1711298 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.7
	I0109 00:13:38.303617 1711298 ssh_runner.go:195] Run: which crictl
	I0109 00:13:38.303712 1711298 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "2a060e2e7101d419352bf82c613158587400be743482d9a537ec4a9d1b4eb93c" in container runtime
	I0109 00:13:38.303758 1711298 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0109 00:13:38.303819 1711298 ssh_runner.go:195] Run: which crictl
	I0109 00:13:38.303926 1711298 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "297c79afbdb81ceb4cf857e0c54a0de7b6ce7ebe01e6cab68fc8baf342be3ea7" in container runtime
	I0109 00:13:38.303971 1711298 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0109 00:13:38.304010 1711298 ssh_runner.go:195] Run: which crictl
	I0109 00:13:38.435129 1711298 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0109 00:13:38.435191 1711298 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0109 00:13:38.435246 1711298 ssh_runner.go:195] Run: which crictl
	I0109 00:13:38.435298 1711298 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.3-0
	I0109 00:13:38.435325 1711298 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.18.20
	I0109 00:13:38.435417 1711298 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.18.20
	I0109 00:13:38.435451 1711298 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.18.20
	I0109 00:13:38.435481 1711298 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.7
	I0109 00:13:38.435507 1711298 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0109 00:13:38.590869 1711298 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17830-1678586/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2
	I0109 00:13:38.590932 1711298 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17830-1678586/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.18.20
	I0109 00:13:38.590975 1711298 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17830-1678586/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.6.7
	I0109 00:13:38.591012 1711298 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17830-1678586/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.18.20
	I0109 00:13:38.592962 1711298 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17830-1678586/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.3-0
	I0109 00:13:38.595905 1711298 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17830-1678586/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.18.20
	I0109 00:13:38.595971 1711298 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0109 00:13:38.660457 1711298 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17830-1678586/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0109 00:13:38.660557 1711298 cache_images.go:92] LoadImages completed in 980.561825ms
	W0109 00:13:38.660657 1711298 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17830-1678586/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.18.20: no such file or directory
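	The warning above is benign fallout from the arch-mismatch probes at image.go:265: the preload tarball carried amd64 images, so each one is removed and re-loaded from the per-arch cache, and the kube-proxy cache file simply does not exist yet on this arm64 host. A minimal sketch of such a probe, assuming podman's `{{.Architecture}}` inspect template (illustrative, not minikube's actual code):

```go
// archcheck.go — a minimal sketch of an arch-mismatch probe in the spirit
// of image.go:265 above. The inspect format string is an assumption.
package main

import (
	"fmt"
	"os/exec"
	"runtime"
	"strings"
)

// imageArch asks the container runtime for an image's architecture.
func imageArch(image string) (string, error) {
	out, err := exec.Command("sudo", "podman", "image", "inspect",
		"--format", "{{.Architecture}}", image).Output()
	if err != nil {
		return "", fmt.Errorf("inspect %s: %w", image, err)
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	img := "registry.k8s.io/kube-proxy:v1.18.20"
	arch, err := imageArch(img)
	if err != nil {
		fmt.Println(err)
		return
	}
	if arch != runtime.GOARCH {
		// Mirrors the "want arm64 got amd64. fixing" warning: the stale
		// image gets removed so the right one can be loaded or pulled.
		fmt.Printf("image %s arch mismatch: want %s got %s\n", img, runtime.GOARCH, arch)
	}
}
```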
	I0109 00:13:38.660747 1711298 ssh_runner.go:195] Run: crio config
	I0109 00:13:38.722181 1711298 cni.go:84] Creating CNI manager for ""
	I0109 00:13:38.722204 1711298 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0109 00:13:38.722229 1711298 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0109 00:13:38.722277 1711298 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-037418 NodeName:ingress-addon-legacy-037418 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0109 00:13:38.722464 1711298 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "ingress-addon-legacy-037418"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0109 00:13:38.722545 1711298 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=ingress-addon-legacy-037418 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-037418 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
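	The kubeadm YAML at kubeadm.go:181 and the kubelet drop-in at kubeadm.go:976 are both rendered from the options struct printed at kubeadm.go:176 above. A minimal sketch of that render step using text/template; the template body and struct fields here are illustrative, not minikube's real ones:

```go
// A sketch of rendering the kubeadm InitConfiguration from a Go template,
// filled from a per-profile options struct. Field names are assumptions.
package main

import (
	"os"
	"text/template"
)

const initCfg = `apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.NodeIP}}
`

type params struct {
	AdvertiseAddress string
	APIServerPort    int
	CRISocket        string
	NodeName         string
	NodeIP           string
}

func main() {
	t := template.Must(template.New("kubeadm").Parse(initCfg))
	// Values taken from the log above.
	_ = t.Execute(os.Stdout, params{
		AdvertiseAddress: "192.168.49.2",
		APIServerPort:    8443,
		CRISocket:        "/var/run/crio/crio.sock",
		NodeName:         "ingress-addon-legacy-037418",
		NodeIP:           "192.168.49.2",
	})
}
```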
	I0109 00:13:38.722623 1711298 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I0109 00:13:38.733462 1711298 binaries.go:44] Found k8s binaries, skipping transfer
	I0109 00:13:38.733588 1711298 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0109 00:13:38.744444 1711298 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (486 bytes)
	I0109 00:13:38.765627 1711298 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I0109 00:13:38.787148 1711298 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
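	"scp memory --> path" above copies those rendered configs straight from memory to the node over SSH, with no temp file on the host side. A stripped-down equivalent, assuming golang.org/x/crypto/ssh and the SSH key/port shown later in this log (a sketch, not minikube's ssh_runner):

```go
// Write an in-memory byte slice to a remote path over an SSH session by
// piping it into `sudo tee`. Key path and address are from this log.
package main

import (
	"bytes"
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func writeRemote(client *ssh.Client, data []byte, dest string) error {
	sess, err := client.NewSession()
	if err != nil {
		return err
	}
	defer sess.Close()
	sess.Stdin = bytes.NewReader(data)
	// tee the stdin stream into the destination file with root privileges
	return sess.Run(fmt.Sprintf("sudo tee %s >/dev/null", dest))
}

func main() {
	key, err := os.ReadFile("/home/jenkins/minikube-integration/17830-1678586/.minikube/machines/ingress-addon-legacy-037418/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:34384", &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // fine for a local test VM
	})
	if err != nil {
		panic(err)
	}
	defer client.Close()
	_ = writeRemote(client, []byte("# kubelet drop-in\n"),
		"/etc/systemd/system/kubelet.service.d/10-kubeadm.conf")
}
```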
	I0109 00:13:38.808229 1711298 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0109 00:13:38.812719 1711298 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
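	The bash one-liner above keeps the control-plane.minikube.internal record idempotent: strip any stale line for the name, then append the current mapping. A rough Go equivalent (a sketch; the real code must go through sudo and write atomically):

```go
// Idempotently upsert an /etc/hosts record: drop old lines for the name,
// append the fresh IP<TAB>name mapping, write the file back.
package main

import (
	"os"
	"strings"
)

func upsertHost(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+name) { // drop any stale record
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	_ = upsertHost("/etc/hosts", "192.168.49.2", "control-plane.minikube.internal")
}
```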
	I0109 00:13:38.825889 1711298 certs.go:56] Setting up /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/ingress-addon-legacy-037418 for IP: 192.168.49.2
	I0109 00:13:38.825929 1711298 certs.go:190] acquiring lock for shared ca certs: {Name:mkd1a8a8c523b20f31a5839efb0f14edb2634692 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0109 00:13:38.826071 1711298 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17830-1678586/.minikube/ca.key
	I0109 00:13:38.826124 1711298 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17830-1678586/.minikube/proxy-client-ca.key
	I0109 00:13:38.826180 1711298 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/ingress-addon-legacy-037418/client.key
	I0109 00:13:38.826196 1711298 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/ingress-addon-legacy-037418/client.crt with IP's: []
	I0109 00:13:39.343738 1711298 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/ingress-addon-legacy-037418/client.crt ...
	I0109 00:13:39.343773 1711298 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/ingress-addon-legacy-037418/client.crt: {Name:mk789f01baee5fe836cf6b5550187857ba7e4355 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0109 00:13:39.343976 1711298 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/ingress-addon-legacy-037418/client.key ...
	I0109 00:13:39.343990 1711298 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/ingress-addon-legacy-037418/client.key: {Name:mkedede18e08f735277afb7c62a7710a959f0afe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0109 00:13:39.344074 1711298 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/ingress-addon-legacy-037418/apiserver.key.dd3b5fb2
	I0109 00:13:39.344085 1711298 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/ingress-addon-legacy-037418/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0109 00:13:40.251449 1711298 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/ingress-addon-legacy-037418/apiserver.crt.dd3b5fb2 ...
	I0109 00:13:40.251485 1711298 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/ingress-addon-legacy-037418/apiserver.crt.dd3b5fb2: {Name:mk70ff904b474866d7160dccc4e9814d59d62d9c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0109 00:13:40.251669 1711298 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/ingress-addon-legacy-037418/apiserver.key.dd3b5fb2 ...
	I0109 00:13:40.251683 1711298 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/ingress-addon-legacy-037418/apiserver.key.dd3b5fb2: {Name:mk23bd65f7e5509110be7111d4b6c00ca9beb09a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0109 00:13:40.251758 1711298 certs.go:337] copying /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/ingress-addon-legacy-037418/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/ingress-addon-legacy-037418/apiserver.crt
	I0109 00:13:40.251839 1711298 certs.go:341] copying /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/ingress-addon-legacy-037418/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/ingress-addon-legacy-037418/apiserver.key
	I0109 00:13:40.251900 1711298 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/ingress-addon-legacy-037418/proxy-client.key
	I0109 00:13:40.251920 1711298 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/ingress-addon-legacy-037418/proxy-client.crt with IP's: []
	I0109 00:13:41.118341 1711298 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/ingress-addon-legacy-037418/proxy-client.crt ...
	I0109 00:13:41.118373 1711298 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/ingress-addon-legacy-037418/proxy-client.crt: {Name:mkd0035cf85bf45b21217a6670c05136f203dd15 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0109 00:13:41.118565 1711298 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/ingress-addon-legacy-037418/proxy-client.key ...
	I0109 00:13:41.118580 1711298 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/ingress-addon-legacy-037418/proxy-client.key: {Name:mk7e464b63a19d76016a59af59062f6b512c008f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
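	crypto.go:68 above is minting leaf certificates signed by the shared minikubeCA, with the apiserver cert carrying the IP SANs listed in the log (node IP, first service-range IP, loopback). A self-contained sketch of that issuance using only the standard library; names and lifetimes here are assumptions:

```go
// Issue a CA plus a CA-signed apiserver serving cert with IP SANs.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(10 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	ca, _ := x509.ParseCertificate(caDER)

	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		// The IP SANs from the log: node IP, service IPs, loopback.
		IPAddresses: []net.IP{
			net.ParseIP("192.168.49.2"), net.ParseIP("10.96.0.1"),
			net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
		},
		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
	}
	der, err := x509.CreateCertificate(rand.Reader, srvTmpl, ca, &srvKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	fmt.Printf("issued %d-byte apiserver serving cert\n", len(der))
}
```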
	I0109 00:13:41.118668 1711298 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/ingress-addon-legacy-037418/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0109 00:13:41.118691 1711298 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/ingress-addon-legacy-037418/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0109 00:13:41.118703 1711298 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/ingress-addon-legacy-037418/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0109 00:13:41.118718 1711298 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/ingress-addon-legacy-037418/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0109 00:13:41.118728 1711298 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-1678586/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0109 00:13:41.118744 1711298 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-1678586/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0109 00:13:41.118758 1711298 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-1678586/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0109 00:13:41.118769 1711298 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-1678586/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0109 00:13:41.118828 1711298 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-1678586/.minikube/certs/home/jenkins/minikube-integration/17830-1678586/.minikube/certs/1683967.pem (1338 bytes)
	W0109 00:13:41.118867 1711298 certs.go:433] ignoring /home/jenkins/minikube-integration/17830-1678586/.minikube/certs/home/jenkins/minikube-integration/17830-1678586/.minikube/certs/1683967_empty.pem, impossibly tiny 0 bytes
	I0109 00:13:41.118881 1711298 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-1678586/.minikube/certs/home/jenkins/minikube-integration/17830-1678586/.minikube/certs/ca-key.pem (1679 bytes)
	I0109 00:13:41.118906 1711298 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-1678586/.minikube/certs/home/jenkins/minikube-integration/17830-1678586/.minikube/certs/ca.pem (1082 bytes)
	I0109 00:13:41.118949 1711298 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-1678586/.minikube/certs/home/jenkins/minikube-integration/17830-1678586/.minikube/certs/cert.pem (1123 bytes)
	I0109 00:13:41.118979 1711298 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-1678586/.minikube/certs/home/jenkins/minikube-integration/17830-1678586/.minikube/certs/key.pem (1679 bytes)
	I0109 00:13:41.119033 1711298 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-1678586/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17830-1678586/.minikube/files/etc/ssl/certs/16839672.pem (1708 bytes)
	I0109 00:13:41.119066 1711298 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-1678586/.minikube/certs/1683967.pem -> /usr/share/ca-certificates/1683967.pem
	I0109 00:13:41.119084 1711298 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-1678586/.minikube/files/etc/ssl/certs/16839672.pem -> /usr/share/ca-certificates/16839672.pem
	I0109 00:13:41.119098 1711298 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-1678586/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0109 00:13:41.119676 1711298 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/ingress-addon-legacy-037418/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0109 00:13:41.148257 1711298 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/ingress-addon-legacy-037418/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0109 00:13:41.176643 1711298 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/ingress-addon-legacy-037418/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0109 00:13:41.205211 1711298 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/ingress-addon-legacy-037418/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0109 00:13:41.233035 1711298 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-1678586/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0109 00:13:41.261014 1711298 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-1678586/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0109 00:13:41.288525 1711298 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-1678586/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0109 00:13:41.316303 1711298 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-1678586/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0109 00:13:41.343628 1711298 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-1678586/.minikube/certs/1683967.pem --> /usr/share/ca-certificates/1683967.pem (1338 bytes)
	I0109 00:13:41.370912 1711298 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-1678586/.minikube/files/etc/ssl/certs/16839672.pem --> /usr/share/ca-certificates/16839672.pem (1708 bytes)
	I0109 00:13:41.398855 1711298 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-1678586/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0109 00:13:41.426210 1711298 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0109 00:13:41.447114 1711298 ssh_runner.go:195] Run: openssl version
	I0109 00:13:41.453777 1711298 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16839672.pem && ln -fs /usr/share/ca-certificates/16839672.pem /etc/ssl/certs/16839672.pem"
	I0109 00:13:41.465312 1711298 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16839672.pem
	I0109 00:13:41.469963 1711298 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan  9 00:09 /usr/share/ca-certificates/16839672.pem
	I0109 00:13:41.470026 1711298 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16839672.pem
	I0109 00:13:41.478420 1711298 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/16839672.pem /etc/ssl/certs/3ec20f2e.0"
	I0109 00:13:41.489897 1711298 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0109 00:13:41.501190 1711298 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0109 00:13:41.505769 1711298 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan  9 00:02 /usr/share/ca-certificates/minikubeCA.pem
	I0109 00:13:41.505850 1711298 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0109 00:13:41.514600 1711298 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0109 00:13:41.525686 1711298 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1683967.pem && ln -fs /usr/share/ca-certificates/1683967.pem /etc/ssl/certs/1683967.pem"
	I0109 00:13:41.537080 1711298 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1683967.pem
	I0109 00:13:41.541598 1711298 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan  9 00:09 /usr/share/ca-certificates/1683967.pem
	I0109 00:13:41.541687 1711298 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1683967.pem
	I0109 00:13:41.550646 1711298 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1683967.pem /etc/ssl/certs/51391683.0"
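	The openssl/ln sequence above is the standard OpenSSL CA-directory layout: each trusted PEM gets a symlink in /etc/ssl/certs named <subject-hash>.0, which is why minikubeCA.pem ends up behind b5213941.0. A small sketch that shells out to openssl for the hash:

```go
// Link a CA cert into /etc/ssl/certs under its OpenSSL subject hash so
// TLS clients on the node can find it. Mirrors the `openssl x509 -hash`
// plus `ln -fs` commands in the log above.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func linkCA(pem string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941" for minikubeCA
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	_ = os.Remove(link) // mirrors ln -fs: replace any stale link
	return os.Symlink(pem, link)
}

func main() {
	if err := linkCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Println(err)
	}
}
```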
	I0109 00:13:41.561984 1711298 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0109 00:13:41.566407 1711298 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0109 00:13:41.566476 1711298 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-037418 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-037418 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0109 00:13:41.566554 1711298 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0109 00:13:41.566612 1711298 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0109 00:13:41.608190 1711298 cri.go:89] found id: ""
	I0109 00:13:41.608273 1711298 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0109 00:13:41.618730 1711298 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0109 00:13:41.629548 1711298 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0109 00:13:41.629691 1711298 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0109 00:13:41.640178 1711298 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0109 00:13:41.640269 1711298 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0109 00:13:41.695457 1711298 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I0109 00:13:41.695736 1711298 kubeadm.go:322] [preflight] Running pre-flight checks
	I0109 00:13:41.746257 1711298 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0109 00:13:41.746371 1711298 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1051-aws
	I0109 00:13:41.746461 1711298 kubeadm.go:322] OS: Linux
	I0109 00:13:41.746534 1711298 kubeadm.go:322] CGROUPS_CPU: enabled
	I0109 00:13:41.746608 1711298 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0109 00:13:41.746696 1711298 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0109 00:13:41.746767 1711298 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0109 00:13:41.746836 1711298 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0109 00:13:41.746911 1711298 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0109 00:13:41.841735 1711298 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0109 00:13:41.841847 1711298 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0109 00:13:41.841953 1711298 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0109 00:13:42.064827 1711298 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0109 00:13:42.066420 1711298 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0109 00:13:42.066501 1711298 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0109 00:13:42.162954 1711298 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0109 00:13:42.167528 1711298 out.go:204]   - Generating certificates and keys ...
	I0109 00:13:42.167637 1711298 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0109 00:13:42.167707 1711298 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0109 00:13:42.428250 1711298 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0109 00:13:42.688165 1711298 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0109 00:13:43.161025 1711298 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0109 00:13:43.948228 1711298 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0109 00:13:44.828984 1711298 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0109 00:13:44.829355 1711298 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-037418 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0109 00:13:45.416911 1711298 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0109 00:13:45.417234 1711298 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-037418 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0109 00:13:45.915863 1711298 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0109 00:13:46.612696 1711298 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0109 00:13:48.048036 1711298 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0109 00:13:48.048348 1711298 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0109 00:13:48.418828 1711298 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0109 00:13:49.162225 1711298 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0109 00:13:49.538983 1711298 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0109 00:13:50.134550 1711298 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0109 00:13:50.135368 1711298 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0109 00:13:50.137779 1711298 out.go:204]   - Booting up control plane ...
	I0109 00:13:50.137915 1711298 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0109 00:13:50.150863 1711298 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0109 00:13:50.152455 1711298 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0109 00:13:50.153478 1711298 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0109 00:13:50.156222 1711298 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0109 00:14:01.658611 1711298 kubeadm.go:322] [apiclient] All control plane components are healthy after 11.502342 seconds
	I0109 00:14:01.658733 1711298 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0109 00:14:01.674204 1711298 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
	I0109 00:14:02.194792 1711298 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0109 00:14:02.194947 1711298 kubeadm.go:322] [mark-control-plane] Marking the node ingress-addon-legacy-037418 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I0109 00:14:02.706305 1711298 kubeadm.go:322] [bootstrap-token] Using token: 4aan7q.hlau6q7fopu5ne2l
	I0109 00:14:02.708546 1711298 out.go:204]   - Configuring RBAC rules ...
	I0109 00:14:02.708682 1711298 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0109 00:14:02.717070 1711298 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0109 00:14:02.728199 1711298 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0109 00:14:02.736830 1711298 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0109 00:14:02.743211 1711298 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0109 00:14:02.749018 1711298 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0109 00:14:02.760602 1711298 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0109 00:14:03.068907 1711298 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0109 00:14:03.146183 1711298 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0109 00:14:03.147740 1711298 kubeadm.go:322] 
	I0109 00:14:03.147815 1711298 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0109 00:14:03.147828 1711298 kubeadm.go:322] 
	I0109 00:14:03.147904 1711298 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0109 00:14:03.147913 1711298 kubeadm.go:322] 
	I0109 00:14:03.147938 1711298 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0109 00:14:03.147997 1711298 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0109 00:14:03.148048 1711298 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0109 00:14:03.148056 1711298 kubeadm.go:322] 
	I0109 00:14:03.148105 1711298 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0109 00:14:03.148180 1711298 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0109 00:14:03.148247 1711298 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0109 00:14:03.148256 1711298 kubeadm.go:322] 
	I0109 00:14:03.148335 1711298 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0109 00:14:03.148410 1711298 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0109 00:14:03.148418 1711298 kubeadm.go:322] 
	I0109 00:14:03.148497 1711298 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 4aan7q.hlau6q7fopu5ne2l \
	I0109 00:14:03.148600 1711298 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:2f5d2b90e0873ecdcc03ee1f37a9ff73145aa86994d578f7f9f8008617cee046 \
	I0109 00:14:03.148626 1711298 kubeadm.go:322]     --control-plane 
	I0109 00:14:03.148634 1711298 kubeadm.go:322] 
	I0109 00:14:03.148751 1711298 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0109 00:14:03.148773 1711298 kubeadm.go:322] 
	I0109 00:14:03.148861 1711298 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 4aan7q.hlau6q7fopu5ne2l \
	I0109 00:14:03.148966 1711298 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:2f5d2b90e0873ecdcc03ee1f37a9ff73145aa86994d578f7f9f8008617cee046 
	I0109 00:14:03.152025 1711298 kubeadm.go:322] W0109 00:13:41.694542    1222 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I0109 00:14:03.152234 1711298 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1051-aws\n", err: exit status 1
	I0109 00:14:03.152341 1711298 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0109 00:14:03.152471 1711298 kubeadm.go:322] W0109 00:13:50.150614    1222 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0109 00:14:03.152593 1711298 kubeadm.go:322] W0109 00:13:50.152294    1222 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
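	The --discovery-token-ca-cert-hash in the join commands above is a SHA-256 digest over the DER-encoded Subject Public Key Info of the cluster CA. It can be recomputed from ca.crt to verify a join command:

```go
// Recompute kubeadm's discovery-token-ca-cert-hash from the CA cert.
package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/ca.crt") // path from the log
	if err != nil {
		fmt.Println(err)
		return
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Println("no PEM block found")
		return
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Println(err)
		return
	}
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Printf("sha256:%x\n", sum) // should match the hash in the join command
}
```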
	I0109 00:14:03.152611 1711298 cni.go:84] Creating CNI manager for ""
	I0109 00:14:03.152623 1711298 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0109 00:14:03.156060 1711298 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0109 00:14:03.158206 1711298 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0109 00:14:03.164691 1711298 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.18.20/kubectl ...
	I0109 00:14:03.164719 1711298 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0109 00:14:03.200213 1711298 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0109 00:14:03.672716 1711298 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0109 00:14:03.672860 1711298 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:14:03.672931 1711298 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a minikube.k8s.io/name=ingress-addon-legacy-037418 minikube.k8s.io/updated_at=2024_01_09T00_14_03_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:14:03.801918 1711298 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:14:03.801980 1711298 ops.go:34] apiserver oom_adj: -16
	I0109 00:14:04.302528 1711298 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:14:04.802264 1711298 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:14:05.302722 1711298 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:14:05.802721 1711298 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:14:06.302742 1711298 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:14:06.802700 1711298 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:14:07.302813 1711298 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:14:07.802354 1711298 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:14:08.303059 1711298 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:14:08.802737 1711298 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:14:09.302289 1711298 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:14:09.802065 1711298 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:14:10.302709 1711298 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:14:10.802742 1711298 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:14:11.302975 1711298 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:14:11.802045 1711298 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:14:12.302604 1711298 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:14:12.802102 1711298 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:14:13.302779 1711298 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:14:13.802431 1711298 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:14:14.302240 1711298 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:14:14.802695 1711298 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:14:15.302021 1711298 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:14:15.802111 1711298 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:14:16.301995 1711298 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:14:16.802065 1711298 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:14:17.302316 1711298 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:14:17.802548 1711298 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:14:18.302227 1711298 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:14:18.417724 1711298 kubeadm.go:1088] duration metric: took 14.744909543s to wait for elevateKubeSystemPrivileges.
	I0109 00:14:18.417768 1711298 kubeadm.go:406] StartCluster complete in 36.851288182s
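	The burst of `kubectl get sa default` calls at roughly 500ms intervals above appears to be elevateKubeSystemPrivileges waiting for the token controller to create the default ServiceAccount, so the kube-system cluster-admin binding created at 00:14:03 can take effect. A sketch of that retry loop (paths as in the log; interval and timeout are assumptions):

```go
// Poll until the default ServiceAccount exists, mirroring the repeated
// `kubectl get sa default` lines in the log above.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func waitForDefaultSA(kubectl, kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		err := exec.Command("sudo", kubectl, "get", "sa", "default",
			"--kubeconfig="+kubeconfig).Run()
		if err == nil {
			return nil // the token controller has created the ServiceAccount
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("default ServiceAccount not ready after %s", timeout)
}

func main() {
	err := waitForDefaultSA("/var/lib/minikube/binaries/v1.18.20/kubectl",
		"/var/lib/minikube/kubeconfig", 2*time.Minute)
	fmt.Println(err)
}
```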
	I0109 00:14:18.417786 1711298 settings.go:142] acquiring lock: {Name:mk0f4be07809726b91ed42aaaa2120516a2004e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0109 00:14:18.417863 1711298 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17830-1678586/kubeconfig
	I0109 00:14:18.418648 1711298 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17830-1678586/kubeconfig: {Name:mkd692fadb6f1e94cc8cf2ddbb66429fa6c0e8fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0109 00:14:18.419417 1711298 kapi.go:59] client config for ingress-addon-legacy-037418: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/ingress-addon-legacy-037418/client.crt", KeyFile:"/home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/ingress-addon-legacy-037418/client.key", CAFile:"/home/jenkins/minikube-integration/17830-1678586/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16b9a10), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0109 00:14:18.421053 1711298 addons.go:505] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0109 00:14:18.421143 1711298 addons.go:69] Setting storage-provisioner=true in profile "ingress-addon-legacy-037418"
	I0109 00:14:18.421158 1711298 addons.go:237] Setting addon storage-provisioner=true in "ingress-addon-legacy-037418"
	I0109 00:14:18.421209 1711298 host.go:66] Checking if "ingress-addon-legacy-037418" exists ...
	I0109 00:14:18.421735 1711298 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-037418 --format={{.State.Status}}
	I0109 00:14:18.421880 1711298 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0109 00:14:18.422068 1711298 cert_rotation.go:137] Starting client certificate rotation controller
	I0109 00:14:18.422341 1711298 config.go:182] Loaded profile config "ingress-addon-legacy-037418": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I0109 00:14:18.422545 1711298 addons.go:69] Setting default-storageclass=true in profile "ingress-addon-legacy-037418"
	I0109 00:14:18.422575 1711298 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ingress-addon-legacy-037418"
	I0109 00:14:18.422896 1711298 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-037418 --format={{.State.Status}}
	I0109 00:14:18.466506 1711298 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0109 00:14:18.468952 1711298 addons.go:429] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0109 00:14:18.468975 1711298 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0109 00:14:18.469055 1711298 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-037418
	I0109 00:14:18.470166 1711298 kapi.go:59] client config for ingress-addon-legacy-037418: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/ingress-addon-legacy-037418/client.crt", KeyFile:"/home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/ingress-addon-legacy-037418/client.key", CAFile:"/home/jenkins/minikube-integration/17830-1678586/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16b9a10), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0109 00:14:18.470428 1711298 addons.go:237] Setting addon default-storageclass=true in "ingress-addon-legacy-037418"
	I0109 00:14:18.470474 1711298 host.go:66] Checking if "ingress-addon-legacy-037418" exists ...
	I0109 00:14:18.471011 1711298 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-037418 --format={{.State.Status}}
	I0109 00:14:18.527356 1711298 addons.go:429] installing /etc/kubernetes/addons/storageclass.yaml
	I0109 00:14:18.527385 1711298 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0109 00:14:18.527448 1711298 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-037418
	I0109 00:14:18.529384 1711298 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34384 SSHKeyPath:/home/jenkins/minikube-integration/17830-1678586/.minikube/machines/ingress-addon-legacy-037418/id_rsa Username:docker}
	I0109 00:14:18.563361 1711298 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34384 SSHKeyPath:/home/jenkins/minikube-integration/17830-1678586/.minikube/machines/ingress-addon-legacy-037418/id_rsa Username:docker}
	I0109 00:14:18.668947 1711298 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0109 00:14:18.762839 1711298 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0109 00:14:18.786628 1711298 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0109 00:14:18.955105 1711298 kapi.go:248] "coredns" deployment in "kube-system" namespace and "ingress-addon-legacy-037418" context rescaled to 1 replicas
	I0109 00:14:18.955200 1711298 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0109 00:14:18.959676 1711298 out.go:177] * Verifying Kubernetes components...
	I0109 00:14:18.962857 1711298 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0109 00:14:19.067407 1711298 start.go:929] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
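	The sed pipeline at 00:14:18.668947 is how that host record lands in CoreDNS: it splices a hosts block in above the forward directive and a log directive above errors. Assuming the stock kubeadm v1.18 Corefile, the rewritten config would look roughly like this (a reconstruction from the sed expressions, not captured from the cluster):
	
	    .:53 {
	        # added above "errors" by the second sed expression
	        log
	        errors
	        # ... stock plugins (health, ready, kubernetes, prometheus) unchanged ...
	        # added above "forward" by the first sed expression
	        hosts {
	           192.168.49.1 host.minikube.internal
	           fallthrough
	        }
	        forward . /etc/resolv.conf
	        cache 30
	    }
	
	The hosts plugin answers host.minikube.internal from the static entry and falls through for every other name, which is what makes the gateway address resolvable from in-cluster pods.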
	I0109 00:14:19.197035 1711298 kapi.go:59] client config for ingress-addon-legacy-037418: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/ingress-addon-legacy-037418/client.crt", KeyFile:"/home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/ingress-addon-legacy-037418/client.key", CAFile:"/home/jenkins/minikube-integration/17830-1678586/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16b9a10), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0109 00:14:19.197487 1711298 node_ready.go:35] waiting up to 6m0s for node "ingress-addon-legacy-037418" to be "Ready" ...
	I0109 00:14:19.206423 1711298 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0109 00:14:19.208807 1711298 addons.go:508] enable addons completed in 787.744954ms: enabled=[storage-provisioner default-storageclass]
	I0109 00:14:21.200422 1711298 node_ready.go:58] node "ingress-addon-legacy-037418" has status "Ready":"False"
	I0109 00:14:23.700817 1711298 node_ready.go:58] node "ingress-addon-legacy-037418" has status "Ready":"False"
	I0109 00:14:25.701443 1711298 node_ready.go:58] node "ingress-addon-legacy-037418" has status "Ready":"False"
	I0109 00:14:26.700625 1711298 node_ready.go:49] node "ingress-addon-legacy-037418" has status "Ready":"True"
	I0109 00:14:26.700651 1711298 node_ready.go:38] duration metric: took 7.503116735s waiting for node "ingress-addon-legacy-037418" to be "Ready" ...
	I0109 00:14:26.700660 1711298 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0109 00:14:26.714023 1711298 pod_ready.go:78] waiting up to 6m0s for pod "coredns-66bff467f8-4vddf" in "kube-system" namespace to be "Ready" ...
	I0109 00:14:28.718220 1711298 pod_ready.go:102] pod "coredns-66bff467f8-4vddf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-01-09 00:14:18 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: HostIPs:[] PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0109 00:14:31.217414 1711298 pod_ready.go:102] pod "coredns-66bff467f8-4vddf" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-01-09 00:14:18 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: HostIPs:[] PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0109 00:14:33.220044 1711298 pod_ready.go:102] pod "coredns-66bff467f8-4vddf" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:35.719954 1711298 pod_ready.go:102] pod "coredns-66bff467f8-4vddf" in "kube-system" namespace has status "Ready":"False"
	I0109 00:14:37.219187 1711298 pod_ready.go:92] pod "coredns-66bff467f8-4vddf" in "kube-system" namespace has status "Ready":"True"
	I0109 00:14:37.219216 1711298 pod_ready.go:81] duration metric: took 10.505157118s waiting for pod "coredns-66bff467f8-4vddf" in "kube-system" namespace to be "Ready" ...
	I0109 00:14:37.219228 1711298 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ingress-addon-legacy-037418" in "kube-system" namespace to be "Ready" ...
	I0109 00:14:37.223410 1711298 pod_ready.go:92] pod "etcd-ingress-addon-legacy-037418" in "kube-system" namespace has status "Ready":"True"
	I0109 00:14:37.223436 1711298 pod_ready.go:81] duration metric: took 4.201112ms waiting for pod "etcd-ingress-addon-legacy-037418" in "kube-system" namespace to be "Ready" ...
	I0109 00:14:37.223450 1711298 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ingress-addon-legacy-037418" in "kube-system" namespace to be "Ready" ...
	I0109 00:14:37.227811 1711298 pod_ready.go:92] pod "kube-apiserver-ingress-addon-legacy-037418" in "kube-system" namespace has status "Ready":"True"
	I0109 00:14:37.227836 1711298 pod_ready.go:81] duration metric: took 4.378763ms waiting for pod "kube-apiserver-ingress-addon-legacy-037418" in "kube-system" namespace to be "Ready" ...
	I0109 00:14:37.227847 1711298 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ingress-addon-legacy-037418" in "kube-system" namespace to be "Ready" ...
	I0109 00:14:37.232230 1711298 pod_ready.go:92] pod "kube-controller-manager-ingress-addon-legacy-037418" in "kube-system" namespace has status "Ready":"True"
	I0109 00:14:37.232256 1711298 pod_ready.go:81] duration metric: took 4.401319ms waiting for pod "kube-controller-manager-ingress-addon-legacy-037418" in "kube-system" namespace to be "Ready" ...
	I0109 00:14:37.232266 1711298 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-njbp6" in "kube-system" namespace to be "Ready" ...
	I0109 00:14:37.236744 1711298 pod_ready.go:92] pod "kube-proxy-njbp6" in "kube-system" namespace has status "Ready":"True"
	I0109 00:14:37.236768 1711298 pod_ready.go:81] duration metric: took 4.494759ms waiting for pod "kube-proxy-njbp6" in "kube-system" namespace to be "Ready" ...
	I0109 00:14:37.236778 1711298 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ingress-addon-legacy-037418" in "kube-system" namespace to be "Ready" ...
	I0109 00:14:37.415148 1711298 request.go:629] Waited for 178.269022ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ingress-addon-legacy-037418
	I0109 00:14:37.615132 1711298 request.go:629] Waited for 197.310881ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ingress-addon-legacy-037418
	I0109 00:14:37.618034 1711298 pod_ready.go:92] pod "kube-scheduler-ingress-addon-legacy-037418" in "kube-system" namespace has status "Ready":"True"
	I0109 00:14:37.618058 1711298 pod_ready.go:81] duration metric: took 381.272261ms waiting for pod "kube-scheduler-ingress-addon-legacy-037418" in "kube-system" namespace to be "Ready" ...
	I0109 00:14:37.618071 1711298 pod_ready.go:38] duration metric: took 10.917394387s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0109 00:14:37.618085 1711298 api_server.go:52] waiting for apiserver process to appear ...
	I0109 00:14:37.618151 1711298 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0109 00:14:37.630957 1711298 api_server.go:72] duration metric: took 18.675710109s to wait for apiserver process to appear ...
	I0109 00:14:37.630983 1711298 api_server.go:88] waiting for apiserver healthz status ...
	I0109 00:14:37.631003 1711298 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0109 00:14:37.640109 1711298 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0109 00:14:37.641057 1711298 api_server.go:141] control plane version: v1.18.20
	I0109 00:14:37.641083 1711298 api_server.go:131] duration metric: took 10.092242ms to wait for apiserver health ...
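	The healthz probe is a plain HTTPS GET against the apiserver; it can be reproduced by hand with the profile's client certificates (a sketch using the paths from the client config logged above):
	
	    MK=/home/jenkins/minikube-integration/17830-1678586/.minikube
	    curl --cacert "$MK/ca.crt" \
	         --cert   "$MK/profiles/ingress-addon-legacy-037418/client.crt" \
	         --key    "$MK/profiles/ingress-addon-legacy-037418/client.key" \
	         https://192.168.49.2:8443/healthz
	    # prints: ok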
	I0109 00:14:37.641091 1711298 system_pods.go:43] waiting for kube-system pods to appear ...
	I0109 00:14:37.814410 1711298 request.go:629] Waited for 173.254561ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0109 00:14:37.820489 1711298 system_pods.go:59] 8 kube-system pods found
	I0109 00:14:37.820527 1711298 system_pods.go:61] "coredns-66bff467f8-4vddf" [e49e4eb1-fbec-4d75-8c16-fc020ec45098] Running
	I0109 00:14:37.820535 1711298 system_pods.go:61] "etcd-ingress-addon-legacy-037418" [c76116a6-a4e4-496d-a6b2-a42e28ace30a] Running
	I0109 00:14:37.820540 1711298 system_pods.go:61] "kindnet-p578w" [d71283cb-5fa4-44f6-bb29-942abffe97bd] Running
	I0109 00:14:37.820546 1711298 system_pods.go:61] "kube-apiserver-ingress-addon-legacy-037418" [93a3b416-7de9-48d0-ac98-4e39309eed13] Running
	I0109 00:14:37.820553 1711298 system_pods.go:61] "kube-controller-manager-ingress-addon-legacy-037418" [f8eab7e0-ab55-4699-9c95-7c5915517286] Running
	I0109 00:14:37.820558 1711298 system_pods.go:61] "kube-proxy-njbp6" [85a675fb-5102-4bba-a060-7089a42303ff] Running
	I0109 00:14:37.820563 1711298 system_pods.go:61] "kube-scheduler-ingress-addon-legacy-037418" [1524fbee-7760-446a-83d4-ff2c7f24184e] Running
	I0109 00:14:37.820574 1711298 system_pods.go:61] "storage-provisioner" [bf3779f7-cce0-4ec9-bd50-98c073e05358] Running
	I0109 00:14:37.820583 1711298 system_pods.go:74] duration metric: took 179.486362ms to wait for pod list to return data ...
	I0109 00:14:37.820591 1711298 default_sa.go:34] waiting for default service account to be created ...
	I0109 00:14:38.015008 1711298 request.go:629] Waited for 194.337824ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/default/serviceaccounts
	I0109 00:14:38.017429 1711298 default_sa.go:45] found service account: "default"
	I0109 00:14:38.017458 1711298 default_sa.go:55] duration metric: took 196.857593ms for default service account to be created ...
	I0109 00:14:38.017468 1711298 system_pods.go:116] waiting for k8s-apps to be running ...
	I0109 00:14:38.214853 1711298 request.go:629] Waited for 197.300747ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0109 00:14:38.220598 1711298 system_pods.go:86] 8 kube-system pods found
	I0109 00:14:38.220633 1711298 system_pods.go:89] "coredns-66bff467f8-4vddf" [e49e4eb1-fbec-4d75-8c16-fc020ec45098] Running
	I0109 00:14:38.220641 1711298 system_pods.go:89] "etcd-ingress-addon-legacy-037418" [c76116a6-a4e4-496d-a6b2-a42e28ace30a] Running
	I0109 00:14:38.220648 1711298 system_pods.go:89] "kindnet-p578w" [d71283cb-5fa4-44f6-bb29-942abffe97bd] Running
	I0109 00:14:38.220653 1711298 system_pods.go:89] "kube-apiserver-ingress-addon-legacy-037418" [93a3b416-7de9-48d0-ac98-4e39309eed13] Running
	I0109 00:14:38.220659 1711298 system_pods.go:89] "kube-controller-manager-ingress-addon-legacy-037418" [f8eab7e0-ab55-4699-9c95-7c5915517286] Running
	I0109 00:14:38.220663 1711298 system_pods.go:89] "kube-proxy-njbp6" [85a675fb-5102-4bba-a060-7089a42303ff] Running
	I0109 00:14:38.220668 1711298 system_pods.go:89] "kube-scheduler-ingress-addon-legacy-037418" [1524fbee-7760-446a-83d4-ff2c7f24184e] Running
	I0109 00:14:38.220673 1711298 system_pods.go:89] "storage-provisioner" [bf3779f7-cce0-4ec9-bd50-98c073e05358] Running
	I0109 00:14:38.220680 1711298 system_pods.go:126] duration metric: took 203.206449ms to wait for k8s-apps to be running ...
	I0109 00:14:38.220693 1711298 system_svc.go:44] waiting for kubelet service to be running ....
	I0109 00:14:38.220756 1711298 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0109 00:14:38.234427 1711298 system_svc.go:56] duration metric: took 13.722526ms WaitForService to wait for kubelet.
	I0109 00:14:38.234510 1711298 kubeadm.go:581] duration metric: took 19.279269037s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0109 00:14:38.234545 1711298 node_conditions.go:102] verifying NodePressure condition ...
	I0109 00:14:38.414907 1711298 request.go:629] Waited for 180.29418ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes
	I0109 00:14:38.417806 1711298 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0109 00:14:38.417842 1711298 node_conditions.go:123] node cpu capacity is 2
	I0109 00:14:38.417854 1711298 node_conditions.go:105] duration metric: took 183.303323ms to run NodePressure ...
	I0109 00:14:38.417867 1711298 start.go:228] waiting for startup goroutines ...
	I0109 00:14:38.417873 1711298 start.go:233] waiting for cluster config update ...
	I0109 00:14:38.417883 1711298 start.go:242] writing updated cluster config ...
	I0109 00:14:38.418183 1711298 ssh_runner.go:195] Run: rm -f paused
	I0109 00:14:38.478544 1711298 start.go:600] kubectl: 1.29.0, cluster: 1.18.20 (minor skew: 11)
	I0109 00:14:38.481533 1711298 out.go:177] 
	W0109 00:14:38.483757 1711298 out.go:239] ! /usr/local/bin/kubectl is version 1.29.0, which may have incompatibilities with Kubernetes 1.18.20.
	I0109 00:14:38.486022 1711298 out.go:177]   - Want kubectl v1.18.20? Try 'minikube kubectl -- get pods -A'
	I0109 00:14:38.488556 1711298 out.go:177] * Done! kubectl is now configured to use "ingress-addon-legacy-037418" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Jan 09 00:17:39 ingress-addon-legacy-037418 crio[894]: time="2024-01-09 00:17:39.581898315Z" level=info msg="Created container 3a98d98c7af93975e159a5dacc84ba2a44525e67cf9eba3c06f36e032b643755: default/hello-world-app-5f5d8b66bb-csf29/hello-world-app" id=ede0e60b-1e87-4527-9f73-29a85c91f408 name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Jan 09 00:17:39 ingress-addon-legacy-037418 crio[894]: time="2024-01-09 00:17:39.582825330Z" level=info msg="Starting container: 3a98d98c7af93975e159a5dacc84ba2a44525e67cf9eba3c06f36e032b643755" id=feec706a-aad5-4dfd-bd42-50d1ba971fc8 name=/runtime.v1alpha2.RuntimeService/StartContainer
	Jan 09 00:17:39 ingress-addon-legacy-037418 conmon[3653]: conmon 3a98d98c7af93975e159 <ninfo>: container 3664 exited with status 1
	Jan 09 00:17:39 ingress-addon-legacy-037418 crio[894]: time="2024-01-09 00:17:39.596497377Z" level=info msg="Started container" PID=3664 containerID=3a98d98c7af93975e159a5dacc84ba2a44525e67cf9eba3c06f36e032b643755 description=default/hello-world-app-5f5d8b66bb-csf29/hello-world-app id=feec706a-aad5-4dfd-bd42-50d1ba971fc8 name=/runtime.v1alpha2.RuntimeService/StartContainer sandboxID=c5abc6792b4e29d1b372c6133661d11d43e7dc82350f7abdbdaf0b6441bb7bd4
	Jan 09 00:17:40 ingress-addon-legacy-037418 crio[894]: time="2024-01-09 00:17:40.087512790Z" level=info msg="Stopping container: 3f75be87060d5753e3e057065d2266e7f0fddb31b2c0602281ed227cf4992431 (timeout: 2s)" id=3734d74f-8c33-4a74-81b8-35c01677e23a name=/runtime.v1alpha2.RuntimeService/StopContainer
	Jan 09 00:17:40 ingress-addon-legacy-037418 crio[894]: time="2024-01-09 00:17:40.101482587Z" level=info msg="Stopping container: 3f75be87060d5753e3e057065d2266e7f0fddb31b2c0602281ed227cf4992431 (timeout: 2s)" id=0ed5baef-f219-4c02-bc14-8c0a840cc975 name=/runtime.v1alpha2.RuntimeService/StopContainer
	Jan 09 00:17:40 ingress-addon-legacy-037418 crio[894]: time="2024-01-09 00:17:40.141983081Z" level=info msg="Removing container: df55d28600963d24528a4636581f3f5fe37a2fe7ce755f8c688aef5a2a043efd" id=4b984bdf-d434-4fea-980b-3a9747ce4364 name=/runtime.v1alpha2.RuntimeService/RemoveContainer
	Jan 09 00:17:40 ingress-addon-legacy-037418 crio[894]: time="2024-01-09 00:17:40.167408167Z" level=info msg="Removed container df55d28600963d24528a4636581f3f5fe37a2fe7ce755f8c688aef5a2a043efd: default/hello-world-app-5f5d8b66bb-csf29/hello-world-app" id=4b984bdf-d434-4fea-980b-3a9747ce4364 name=/runtime.v1alpha2.RuntimeService/RemoveContainer
	Jan 09 00:17:40 ingress-addon-legacy-037418 crio[894]: time="2024-01-09 00:17:40.500754862Z" level=info msg="Stopping pod sandbox: d1196690460bd9ce977798268df09f996cea9182b86efffd1dd31af84b23cf64" id=d8a4de91-8e90-430e-9048-c542c32edbcd name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Jan 09 00:17:40 ingress-addon-legacy-037418 crio[894]: time="2024-01-09 00:17:40.500804733Z" level=info msg="Stopped pod sandbox (already stopped): d1196690460bd9ce977798268df09f996cea9182b86efffd1dd31af84b23cf64" id=d8a4de91-8e90-430e-9048-c542c32edbcd name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Jan 09 00:17:42 ingress-addon-legacy-037418 crio[894]: time="2024-01-09 00:17:42.103987244Z" level=warning msg="Stopping container 3f75be87060d5753e3e057065d2266e7f0fddb31b2c0602281ed227cf4992431 with stop signal timed out: timeout reached after 2 seconds waiting for container process to exit" id=3734d74f-8c33-4a74-81b8-35c01677e23a name=/runtime.v1alpha2.RuntimeService/StopContainer
	Jan 09 00:17:42 ingress-addon-legacy-037418 conmon[2737]: conmon 3f75be87060d5753e3e0 <ninfo>: container 2748 exited with status 137
	Jan 09 00:17:42 ingress-addon-legacy-037418 crio[894]: time="2024-01-09 00:17:42.268900046Z" level=info msg="Stopped container 3f75be87060d5753e3e057065d2266e7f0fddb31b2c0602281ed227cf4992431: ingress-nginx/ingress-nginx-controller-7fcf777cb7-dbvsf/controller" id=0ed5baef-f219-4c02-bc14-8c0a840cc975 name=/runtime.v1alpha2.RuntimeService/StopContainer
	Jan 09 00:17:42 ingress-addon-legacy-037418 crio[894]: time="2024-01-09 00:17:42.268931538Z" level=info msg="Stopped container 3f75be87060d5753e3e057065d2266e7f0fddb31b2c0602281ed227cf4992431: ingress-nginx/ingress-nginx-controller-7fcf777cb7-dbvsf/controller" id=3734d74f-8c33-4a74-81b8-35c01677e23a name=/runtime.v1alpha2.RuntimeService/StopContainer
	Jan 09 00:17:42 ingress-addon-legacy-037418 crio[894]: time="2024-01-09 00:17:42.269610658Z" level=info msg="Stopping pod sandbox: ca9238e7cbf6afb3de4a746c6b46b5593f9a1fd22ddfd5980a80c1f372fd803a" id=f42ed15c-9851-4e77-b4a6-856e5188d24e name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Jan 09 00:17:42 ingress-addon-legacy-037418 crio[894]: time="2024-01-09 00:17:42.271598860Z" level=info msg="Stopping pod sandbox: ca9238e7cbf6afb3de4a746c6b46b5593f9a1fd22ddfd5980a80c1f372fd803a" id=72dc3154-016d-4786-9436-75f88e21cf2a name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Jan 09 00:17:42 ingress-addon-legacy-037418 crio[894]: time="2024-01-09 00:17:42.273639600Z" level=info msg="Restoring iptables rules: *nat\n:KUBE-HP-BS2KXXVYGU7KEKI2 - [0:0]\n:KUBE-HP-M23QUQ3JZFOZJUWI - [0:0]\n:KUBE-HOSTPORTS - [0:0]\n-X KUBE-HP-BS2KXXVYGU7KEKI2\n-X KUBE-HP-M23QUQ3JZFOZJUWI\nCOMMIT\n"
	Jan 09 00:17:42 ingress-addon-legacy-037418 crio[894]: time="2024-01-09 00:17:42.275317941Z" level=info msg="Closing host port tcp:80"
	Jan 09 00:17:42 ingress-addon-legacy-037418 crio[894]: time="2024-01-09 00:17:42.275372530Z" level=info msg="Closing host port tcp:443"
	Jan 09 00:17:42 ingress-addon-legacy-037418 crio[894]: time="2024-01-09 00:17:42.276629764Z" level=info msg="Host port tcp:80 does not have an open socket"
	Jan 09 00:17:42 ingress-addon-legacy-037418 crio[894]: time="2024-01-09 00:17:42.276655134Z" level=info msg="Host port tcp:443 does not have an open socket"
	Jan 09 00:17:42 ingress-addon-legacy-037418 crio[894]: time="2024-01-09 00:17:42.276802204Z" level=info msg="Got pod network &{Name:ingress-nginx-controller-7fcf777cb7-dbvsf Namespace:ingress-nginx ID:ca9238e7cbf6afb3de4a746c6b46b5593f9a1fd22ddfd5980a80c1f372fd803a UID:a0e96d08-6e4f-4edb-8eaf-6589a6d22fe6 NetNS:/var/run/netns/885019e5-dff5-48fc-8f3a-e26170c4557d Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Jan 09 00:17:42 ingress-addon-legacy-037418 crio[894]: time="2024-01-09 00:17:42.276948142Z" level=info msg="Deleting pod ingress-nginx_ingress-nginx-controller-7fcf777cb7-dbvsf from CNI network \"kindnet\" (type=ptp)"
	Jan 09 00:17:42 ingress-addon-legacy-037418 crio[894]: time="2024-01-09 00:17:42.308034645Z" level=info msg="Stopped pod sandbox: ca9238e7cbf6afb3de4a746c6b46b5593f9a1fd22ddfd5980a80c1f372fd803a" id=f42ed15c-9851-4e77-b4a6-856e5188d24e name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Jan 09 00:17:42 ingress-addon-legacy-037418 crio[894]: time="2024-01-09 00:17:42.308177563Z" level=info msg="Stopped pod sandbox (already stopped): ca9238e7cbf6afb3de4a746c6b46b5593f9a1fd22ddfd5980a80c1f372fd803a" id=72dc3154-016d-4786-9436-75f88e21cf2a name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
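	The status-137 exit reported by conmon above is 128+9: the controller outlived its 2s SIGTERM grace period, so CRI-O escalated to SIGKILL, after which sandbox teardown (hostport iptables cleanup, CNI delete) proceeds normally. The same stop path can be driven by hand from the node; a sketch, assuming the crictl bundled in the node image:
	
	    minikube -p ingress-addon-legacy-037418 ssh -- \
	      sudo crictl stop --timeout 2 3f75be87060d5753e3e057065d2266e7f0fddb31b2c0602281ed227cf4992431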
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	3a98d98c7af93       dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79                                                   8 seconds ago       Exited              hello-world-app           2                   c5abc6792b4e2       hello-world-app-5f5d8b66bb-csf29
	d125f0a02aa53       docker.io/library/nginx@sha256:7913e8fa2e6a5f0160a5e6b7ea48b7d4a301c6058d63c3d632a35a59093cb4eb                    2 minutes ago       Running             nginx                     0                   dab10f02aebf3       nginx
	3f75be87060d5       registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324   2 minutes ago       Exited              controller                0                   ca9238e7cbf6a       ingress-nginx-controller-7fcf777cb7-dbvsf
	0a8cd17f96e3a       a883f7fc35610a84d589cbb450eade9face1d1a8b2cbdafa1690cbffe68cfe88                                                   3 minutes ago       Exited              patch                     1                   57e85388a342a       ingress-nginx-admission-patch-7m4r5
	21c7388960425       docker.io/jettech/kube-webhook-certgen@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7     3 minutes ago       Exited              create                    0                   50bf3b4ec4556       ingress-nginx-admission-create-2sq95
	48d291ba00663       6e17ba78cf3ebe1410fe828dc4ca57d3df37ad0b3c1a64161e5c27d57a24d184                                                   3 minutes ago       Running             coredns                   0                   49539facd5dc1       coredns-66bff467f8-4vddf
	567671397e2b7       gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2    3 minutes ago       Running             storage-provisioner       0                   a34b08b2a2f67       storage-provisioner
	a2bc3fbe5b560       docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052                 3 minutes ago       Running             kindnet-cni               0                   50518e83a45de       kindnet-p578w
	b05248f176353       565297bc6f7d41fdb7a8ac7f9d75617ef4e6efdd1b1e41af6e060e19c44c28a8                                                   3 minutes ago       Running             kube-proxy                0                   5ceefbd0c751d       kube-proxy-njbp6
	aa4378e7b904b       2694cf044d66591c37b12c60ce1f1cdba3d271af5ebda43a2e4d32ebbadd97d0                                                   3 minutes ago       Running             kube-apiserver            0                   5e55876687964       kube-apiserver-ingress-addon-legacy-037418
	89ff617b3b18c       68a4fac29a865f21217550dbd3570dc1adbc602cf05d6eeb6f060eec1359e1f1                                                   3 minutes ago       Running             kube-controller-manager   0                   a1d543a51dcac       kube-controller-manager-ingress-addon-legacy-037418
	3bbc761019d0a       ab707b0a0ea339254cc6e3f2e7d618d4793d5129acb2288e9194769271404952                                                   3 minutes ago       Running             etcd                      0                   4341ebf2caf8f       etcd-ingress-addon-legacy-037418
	a30e455ba389f       095f37015706de6eedb4f57eb2f9a25a1e3bf4bec63d50ba73f8968ef4094fd1                                                   3 minutes ago       Running             kube-scheduler            0                   3b83f463ffbd8       kube-scheduler-ingress-addon-legacy-037418
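	This table is the CRI-level view of every container on the node, running and exited. It can be regenerated interactively with something like (a sketch; crictl ships in the minikube node image):
	
	    minikube -p ingress-addon-legacy-037418 ssh -- sudo crictl ps -a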
	
	
	==> coredns [48d291ba006632f4e90183e22b4722038f377106ec7af3158d4fb187354ef7a7] <==
	[INFO] 10.244.0.5:48780 - 49773 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002163458s
	[INFO] 10.244.0.5:41756 - 20763 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000200182s
	[INFO] 10.244.0.5:48780 - 42630 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000107061s
	[INFO] 10.244.0.5:41756 - 30296 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000053333s
	[INFO] 10.244.0.5:41756 - 22221 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001246017s
	[INFO] 10.244.0.5:41756 - 32100 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001045228s
	[INFO] 10.244.0.5:41756 - 13177 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000051677s
	[INFO] 10.244.0.5:34971 - 47724 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.00007571s
	[INFO] 10.244.0.5:45666 - 52009 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000087508s
	[INFO] 10.244.0.5:45666 - 3514 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000095558s
	[INFO] 10.244.0.5:34971 - 2322 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000079016s
	[INFO] 10.244.0.5:34971 - 35813 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000037268s
	[INFO] 10.244.0.5:34971 - 57514 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000037473s
	[INFO] 10.244.0.5:34971 - 49122 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000043012s
	[INFO] 10.244.0.5:34971 - 53802 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.00003218s
	[INFO] 10.244.0.5:45666 - 48753 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000124506s
	[INFO] 10.244.0.5:45666 - 24172 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000041371s
	[INFO] 10.244.0.5:34971 - 35 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001583432s
	[INFO] 10.244.0.5:45666 - 31032 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000136773s
	[INFO] 10.244.0.5:34971 - 41749 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.00121976s
	[INFO] 10.244.0.5:34971 - 43834 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000040436s
	[INFO] 10.244.0.5:45666 - 41786 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.00005875s
	[INFO] 10.244.0.5:45666 - 39194 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.000990745s
	[INFO] 10.244.0.5:45666 - 50134 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.007355667s
	[INFO] 10.244.0.5:45666 - 23244 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000073707s
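	The NXDOMAIN churn above is ordinary search-list expansion, not a resolution failure: the querying pod (10.244.0.5, the ingress controller) asks for hello-world-app.default.svc.cluster.local, which has fewer dots than ndots, so every resolv.conf search suffix is tried before the bare name finally answers NOERROR. Judging from the suffixes in these queries, the pod's /etc/resolv.conf looks roughly like this (a reconstruction; 10.96.0.10 is minikube's usual kube-dns ClusterIP):
	
	    nameserver 10.96.0.10
	    search ingress-nginx.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal
	    options ndots:5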
	
	
	==> describe nodes <==
	Name:               ingress-addon-legacy-037418
	Roles:              master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ingress-addon-legacy-037418
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a
	                    minikube.k8s.io/name=ingress-addon-legacy-037418
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_09T00_14_03_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 09 Jan 2024 00:14:00 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ingress-addon-legacy-037418
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 09 Jan 2024 00:17:46 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 09 Jan 2024 00:17:36 +0000   Tue, 09 Jan 2024 00:13:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 09 Jan 2024 00:17:36 +0000   Tue, 09 Jan 2024 00:13:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 09 Jan 2024 00:17:36 +0000   Tue, 09 Jan 2024 00:13:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 09 Jan 2024 00:17:36 +0000   Tue, 09 Jan 2024 00:14:26 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ingress-addon-legacy-037418
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	System Info:
	  Machine ID:                 893516f6b7a54ec3b5f380f4b5349d44
	  System UUID:                147495af-1b60-4d80-b0da-cb3e1ca93670
	  Boot ID:                    9a753e90-64b1-452a-8e10-9b878947801f
	  Kernel Version:             5.15.0-1051-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.18.20
	  Kube-Proxy Version:         v1.18.20
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                                   ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5f5d8b66bb-csf29                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	  default                     nginx                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m48s
	  kube-system                 coredns-66bff467f8-4vddf                               100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     3m30s
	  kube-system                 etcd-ingress-addon-legacy-037418                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m42s
	  kube-system                 kindnet-p578w                                          100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      3m30s
	  kube-system                 kube-apiserver-ingress-addon-legacy-037418             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m42s
	  kube-system                 kube-controller-manager-ingress-addon-legacy-037418    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m42s
	  kube-system                 kube-proxy-njbp6                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m30s
	  kube-system                 kube-scheduler-ingress-addon-legacy-037418             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m42s
	  kube-system                 storage-provisioner                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m29s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             120Mi (1%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
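	  (The percentages are against allocatable capacity: 750m of 2 CPUs ≈ 37%, and 120Mi of 8022496Ki ≈ 7834Mi ≈ 1%.)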
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  NodeHasSufficientMemory  3m56s (x4 over 3m56s)  kubelet     Node ingress-addon-legacy-037418 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m56s (x4 over 3m56s)  kubelet     Node ingress-addon-legacy-037418 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m56s (x3 over 3m56s)  kubelet     Node ingress-addon-legacy-037418 status is now: NodeHasSufficientPID
	  Normal  Starting                 3m42s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m42s                  kubelet     Node ingress-addon-legacy-037418 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m42s                  kubelet     Node ingress-addon-legacy-037418 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m42s                  kubelet     Node ingress-addon-legacy-037418 status is now: NodeHasSufficientPID
	  Normal  Starting                 3m28s                  kube-proxy  Starting kube-proxy.
	  Normal  NodeReady                3m22s                  kubelet     Node ingress-addon-legacy-037418 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.001079] FS-Cache: O-key=[8] '2f76ed0000000000'
	[  +0.000720] FS-Cache: N-cookie c=00000066 [p=0000005d fl=2 nc=0 na=1]
	[  +0.001023] FS-Cache: N-cookie d=000000001df03bef{9p.inode} n=0000000009be8c6c
	[  +0.001112] FS-Cache: N-key=[8] '2f76ed0000000000'
	[  +0.010607] FS-Cache: Duplicate cookie detected
	[  +0.000806] FS-Cache: O-cookie c=00000060 [p=0000005d fl=226 nc=0 na=1]
	[  +0.001106] FS-Cache: O-cookie d=000000001df03bef{9p.inode} n=00000000b45aa7e6
	[  +0.001139] FS-Cache: O-key=[8] '2f76ed0000000000'
	[  +0.000750] FS-Cache: N-cookie c=00000067 [p=0000005d fl=2 nc=0 na=1]
	[  +0.001054] FS-Cache: N-cookie d=000000001df03bef{9p.inode} n=00000000e9f33a46
	[  +0.001189] FS-Cache: N-key=[8] '2f76ed0000000000'
	[  +2.185619] FS-Cache: Duplicate cookie detected
	[  +0.000751] FS-Cache: O-cookie c=0000005e [p=0000005d fl=226 nc=0 na=1]
	[  +0.001094] FS-Cache: O-cookie d=000000001df03bef{9p.inode} n=00000000ed3c59a1
	[  +0.001046] FS-Cache: O-key=[8] '2e76ed0000000000'
	[  +0.000727] FS-Cache: N-cookie c=00000069 [p=0000005d fl=2 nc=0 na=1]
	[  +0.000928] FS-Cache: N-cookie d=000000001df03bef{9p.inode} n=00000000f2938397
	[  +0.001085] FS-Cache: N-key=[8] '2e76ed0000000000'
	[  +0.397498] FS-Cache: Duplicate cookie detected
	[  +0.000731] FS-Cache: O-cookie c=00000063 [p=0000005d fl=226 nc=0 na=1]
	[  +0.001010] FS-Cache: O-cookie d=000000001df03bef{9p.inode} n=000000005629db1e
	[  +0.001140] FS-Cache: O-key=[8] '3476ed0000000000'
	[  +0.000717] FS-Cache: N-cookie c=0000006a [p=0000005d fl=2 nc=0 na=1]
	[  +0.000990] FS-Cache: N-cookie d=000000001df03bef{9p.inode} n=0000000009be8c6c
	[  +0.001266] FS-Cache: N-key=[8] '3476ed0000000000'
	
	
	==> etcd [3bbc761019d0a5377759864e5fdd73e1bf7b42f7070f44fbb2f46b8482db146d] <==
	raft2024/01/09 00:13:55 INFO: aec36adc501070cc became follower at term 0
	raft2024/01/09 00:13:55 INFO: newRaft aec36adc501070cc [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
	raft2024/01/09 00:13:55 INFO: aec36adc501070cc became follower at term 1
	raft2024/01/09 00:13:55 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2024-01-09 00:13:55.760638 W | auth: simple token is not cryptographically signed
	2024-01-09 00:13:55.819252 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
	raft2024/01/09 00:13:55 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2024-01-09 00:13:55.942712 I | etcdserver: aec36adc501070cc as single-node; fast-forwarding 9 ticks (election ticks 10)
	2024-01-09 00:13:55.942886 I | etcdserver/membership: added member aec36adc501070cc [https://192.168.49.2:2380] to cluster fa54960ea34d58be
	2024-01-09 00:13:56.025393 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2024-01-09 00:13:56.025695 I | embed: listening for peers on 192.168.49.2:2380
	2024-01-09 00:13:56.025942 I | embed: listening for metrics on http://127.0.0.1:2381
	raft2024/01/09 00:13:56 INFO: aec36adc501070cc is starting a new election at term 1
	raft2024/01/09 00:13:56 INFO: aec36adc501070cc became candidate at term 2
	raft2024/01/09 00:13:56 INFO: aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2
	raft2024/01/09 00:13:56 INFO: aec36adc501070cc became leader at term 2
	raft2024/01/09 00:13:56 INFO: raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2
	2024-01-09 00:13:56.315099 I | etcdserver: setting up the initial cluster version to 3.4
	2024-01-09 00:13:56.315838 N | etcdserver/membership: set the initial cluster version to 3.4
	2024-01-09 00:13:56.315945 I | etcdserver/api: enabled capabilities for version 3.4
	2024-01-09 00:13:56.316020 I | etcdserver: published {Name:ingress-addon-legacy-037418 ClientURLs:[https://192.168.49.2:2379]} to cluster fa54960ea34d58be
	2024-01-09 00:13:56.316060 I | embed: ready to serve client requests
	2024-01-09 00:13:56.317956 I | embed: serving client requests on 127.0.0.1:2379
	2024-01-09 00:13:56.319253 I | embed: ready to serve client requests
	2024-01-09 00:13:56.321301 I | embed: serving client requests on 192.168.49.2:2379
	
	
	==> kernel <==
	 00:17:48 up  7:00,  0 users,  load average: 0.37, 0.91, 1.86
	Linux ingress-addon-legacy-037418 5.15.0-1051-aws #56~20.04.1-Ubuntu SMP Tue Nov 28 15:43:06 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	
	==> kindnet [a2bc3fbe5b560e857b07f7117a81fdb62036fedf99d588fb56ee18352efda3b5] <==
	I0109 00:15:41.085111       1 main.go:227] handling current node
	I0109 00:15:51.088608       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0109 00:15:51.088639       1 main.go:227] handling current node
	I0109 00:16:01.097449       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0109 00:16:01.097477       1 main.go:227] handling current node
	I0109 00:16:11.100970       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0109 00:16:11.100997       1 main.go:227] handling current node
	I0109 00:16:21.104735       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0109 00:16:21.104768       1 main.go:227] handling current node
	I0109 00:16:31.113489       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0109 00:16:31.113519       1 main.go:227] handling current node
	I0109 00:16:41.125281       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0109 00:16:41.125313       1 main.go:227] handling current node
	I0109 00:16:51.128555       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0109 00:16:51.128584       1 main.go:227] handling current node
	I0109 00:17:01.140751       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0109 00:17:01.140779       1 main.go:227] handling current node
	I0109 00:17:11.144501       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0109 00:17:11.144529       1 main.go:227] handling current node
	I0109 00:17:21.155365       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0109 00:17:21.155399       1 main.go:227] handling current node
	I0109 00:17:31.166308       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0109 00:17:31.166337       1 main.go:227] handling current node
	I0109 00:17:41.175116       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0109 00:17:41.175148       1 main.go:227] handling current node
	
	
	==> kube-apiserver [aa4378e7b904b3ff1b11c99c8d3d71578fa6fa3b073435519093c44764204ff9] <==
	I0109 00:14:00.171112       1 shared_informer.go:223] Waiting for caches to sync for crd-autoregister
	E0109 00:14:00.302062       1 controller.go:152] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.49.2, ResourceVersion: 0, AdditionalErrorMsg: 
	I0109 00:14:00.364733       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0109 00:14:00.366133       1 shared_informer.go:230] Caches are synced for cluster_authentication_trust_controller 
	I0109 00:14:00.370616       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0109 00:14:00.386037       1 cache.go:39] Caches are synced for autoregister controller
	I0109 00:14:00.386752       1 shared_informer.go:230] Caches are synced for crd-autoregister 
	I0109 00:14:01.163456       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0109 00:14:01.163594       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0109 00:14:01.171755       1 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
	I0109 00:14:01.175463       1 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
	I0109 00:14:01.175484       1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
	I0109 00:14:01.554766       1 controller.go:609] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0109 00:14:01.595995       1 controller.go:609] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0109 00:14:01.693790       1 lease.go:224] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I0109 00:14:01.694776       1 controller.go:609] quota admission added evaluator for: endpoints
	I0109 00:14:01.698116       1 controller.go:609] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0109 00:14:02.549291       1 controller.go:609] quota admission added evaluator for: serviceaccounts
	I0109 00:14:03.032252       1 controller.go:609] quota admission added evaluator for: deployments.apps
	I0109 00:14:03.119646       1 controller.go:609] quota admission added evaluator for: daemonsets.apps
	I0109 00:14:06.447555       1 controller.go:609] quota admission added evaluator for: leases.coordination.k8s.io
	I0109 00:14:17.983913       1 controller.go:609] quota admission added evaluator for: replicasets.apps
	I0109 00:14:18.288964       1 controller.go:609] quota admission added evaluator for: controllerrevisions.apps
	I0109 00:14:39.358745       1 controller.go:609] quota admission added evaluator for: jobs.batch
	I0109 00:15:00.808366       1 controller.go:609] quota admission added evaluator for: ingresses.networking.k8s.io
	
	
	==> kube-controller-manager [89ff617b3b18cd3d7042654849858c59d4b966ce3b3963f683032451c534aa7c] <==
	I0109 00:14:18.298940       1 event.go:278] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kube-proxy", UID:"f3b8ca3c-e4df-49f3-88ca-8a2773d597fa", APIVersion:"apps/v1", ResourceVersion:"214", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-proxy-njbp6
	I0109 00:14:18.298975       1 event.go:278] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kindnet", UID:"4cf70690-b340-458a-8482-a5f9a710e782", APIVersion:"apps/v1", ResourceVersion:"230", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kindnet-p578w
	I0109 00:14:18.319733       1 shared_informer.go:230] Caches are synced for persistent volume 
	I0109 00:14:18.350121       1 shared_informer.go:230] Caches are synced for PV protection 
	I0109 00:14:18.362511       1 shared_informer.go:230] Caches are synced for attach detach 
	I0109 00:14:18.407829       1 shared_informer.go:230] Caches are synced for expand 
	I0109 00:14:18.529175       1 shared_informer.go:223] Waiting for caches to sync for resource quota
	I0109 00:14:18.535716       1 shared_informer.go:230] Caches are synced for resource quota 
	I0109 00:14:18.552467       1 shared_informer.go:230] Caches are synced for disruption 
	I0109 00:14:18.552488       1 disruption.go:339] Sending events to api server.
	I0109 00:14:18.552557       1 shared_informer.go:230] Caches are synced for resource quota 
	I0109 00:14:18.562630       1 shared_informer.go:230] Caches are synced for garbage collector 
	I0109 00:14:18.562732       1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0109 00:14:18.582027       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"a2fd3891-fb68-4cdc-8b01-5bd788de12b5", APIVersion:"apps/v1", ResourceVersion:"365", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set coredns-66bff467f8 to 1
	I0109 00:14:18.601274       1 shared_informer.go:230] Caches are synced for garbage collector 
	I0109 00:14:18.674866       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"7493f1d5-26af-417b-9e39-85add5f1b7b1", APIVersion:"apps/v1", ResourceVersion:"366", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: coredns-66bff467f8-ctzw6
	I0109 00:14:28.241065       1 node_lifecycle_controller.go:1226] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	I0109 00:14:39.339674       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"49ad2fec-d8de-4ced-8855-77bf8d0d65af", APIVersion:"apps/v1", ResourceVersion:"477", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set ingress-nginx-controller-7fcf777cb7 to 1
	I0109 00:14:39.360492       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7", UID:"3f9cee9a-e8c1-4ca3-9eb2-2bc3102de9ec", APIVersion:"apps/v1", ResourceVersion:"478", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-controller-7fcf777cb7-dbvsf
	I0109 00:14:39.383827       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"de35e342-d812-4f4c-a415-311c75af185b", APIVersion:"batch/v1", ResourceVersion:"483", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-create-2sq95
	I0109 00:14:39.430632       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"12299749-5ca9-48c0-b676-1943aad9e4a3", APIVersion:"batch/v1", ResourceVersion:"493", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-patch-7m4r5
	I0109 00:14:41.825116       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"de35e342-d812-4f4c-a415-311c75af185b", APIVersion:"batch/v1", ResourceVersion:"494", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I0109 00:14:42.855024       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"12299749-5ca9-48c0-b676-1943aad9e4a3", APIVersion:"batch/v1", ResourceVersion:"502", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I0109 00:17:21.515700       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"hello-world-app", UID:"fe20329f-605f-40dc-9e53-396537d93f0a", APIVersion:"apps/v1", ResourceVersion:"717", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set hello-world-app-5f5d8b66bb to 1
	I0109 00:17:21.538278       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"hello-world-app-5f5d8b66bb", UID:"81a6cf32-df7a-4083-9a48-9edef3f2672d", APIVersion:"apps/v1", ResourceVersion:"718", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: hello-world-app-5f5d8b66bb-csf29
	
	
	==> kube-proxy [b05248f1763531829bbfe31d7d10d913e362d5fe76182d3fc1f3847015598dfd] <==
	W0109 00:14:20.720590       1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
	I0109 00:14:20.733229       1 node.go:136] Successfully retrieved node IP: 192.168.49.2
	I0109 00:14:20.733276       1 server_others.go:186] Using iptables Proxier.
	I0109 00:14:20.733622       1 server.go:583] Version: v1.18.20
	I0109 00:14:20.735827       1 config.go:315] Starting service config controller
	I0109 00:14:20.737928       1 shared_informer.go:223] Waiting for caches to sync for service config
	I0109 00:14:20.736029       1 config.go:133] Starting endpoints config controller
	I0109 00:14:20.742910       1 shared_informer.go:223] Waiting for caches to sync for endpoints config
	I0109 00:14:20.838163       1 shared_informer.go:230] Caches are synced for service config 
	I0109 00:14:20.843089       1 shared_informer.go:230] Caches are synced for endpoints config 
	
	
	==> kube-scheduler [a30e455ba389fe366a47344053208ad271c99deed13fbb79b172bbdcbdfcec21] <==
	I0109 00:14:00.319155       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I0109 00:14:00.319180       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I0109 00:14:00.321346       1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
	I0109 00:14:00.321469       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0109 00:14:00.321486       1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0109 00:14:00.321508       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0109 00:14:00.338734       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0109 00:14:00.339058       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0109 00:14:00.339195       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0109 00:14:00.339306       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0109 00:14:00.339407       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0109 00:14:00.339517       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0109 00:14:00.339601       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0109 00:14:00.342591       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0109 00:14:00.342677       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0109 00:14:00.342737       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0109 00:14:00.342790       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0109 00:14:00.342838       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0109 00:14:01.169702       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0109 00:14:01.180875       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0109 00:14:01.203143       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0109 00:14:01.322961       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0109 00:14:01.342556       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0109 00:14:01.621596       1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	E0109 00:14:19.206833       1 factory.go:503] pod: kube-system/storage-provisioner is already present in the active queue
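The "Failed to list" burst above is typical of kube-scheduler starting before its RBAC bindings are visible to it; the errors stop once the informer caches sync at 00:14:01, and the only later entry is a benign duplicate-queue notice. A quick way to confirm the permissions did converge, sketched here assuming admin kubectl access to this cluster:

	kubectl auth can-i list poddisruptionbudgets --as=system:kube-scheduler
	kubectl auth can-i list configmaps --as=system:kube-scheduler -n kube-system

Both should print "yes" on a healthy cluster.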
	
	
	==> kubelet <==
	Jan 09 00:17:26 ingress-addon-legacy-037418 kubelet[1614]: I0109 00:17:26.116667    1614 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 9a64e32d323187511d5da1a00f10cf514a2aacde5b9acf78b195e71b5e2ac622
	Jan 09 00:17:26 ingress-addon-legacy-037418 kubelet[1614]: I0109 00:17:26.116803    1614 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: df55d28600963d24528a4636581f3f5fe37a2fe7ce755f8c688aef5a2a043efd
	Jan 09 00:17:26 ingress-addon-legacy-037418 kubelet[1614]: E0109 00:17:26.117025    1614 pod_workers.go:191] Error syncing pod 6fe3ed93-4ee0-43bd-9b90-fa4d287b1511 ("hello-world-app-5f5d8b66bb-csf29_default(6fe3ed93-4ee0-43bd-9b90-fa4d287b1511)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 10s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-csf29_default(6fe3ed93-4ee0-43bd-9b90-fa4d287b1511)"
	Jan 09 00:17:27 ingress-addon-legacy-037418 kubelet[1614]: I0109 00:17:27.119449    1614 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: df55d28600963d24528a4636581f3f5fe37a2fe7ce755f8c688aef5a2a043efd
	Jan 09 00:17:27 ingress-addon-legacy-037418 kubelet[1614]: E0109 00:17:27.119718    1614 pod_workers.go:191] Error syncing pod 6fe3ed93-4ee0-43bd-9b90-fa4d287b1511 ("hello-world-app-5f5d8b66bb-csf29_default(6fe3ed93-4ee0-43bd-9b90-fa4d287b1511)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 10s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-csf29_default(6fe3ed93-4ee0-43bd-9b90-fa4d287b1511)"
	Jan 09 00:17:30 ingress-addon-legacy-037418 kubelet[1614]: E0109 00:17:30.501705    1614 remote_image.go:87] ImageStatus "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" from image service failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Jan 09 00:17:30 ingress-addon-legacy-037418 kubelet[1614]: E0109 00:17:30.501750    1614 kuberuntime_image.go:85] ImageStatus for image {"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab"} failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Jan 09 00:17:30 ingress-addon-legacy-037418 kubelet[1614]: E0109 00:17:30.501799    1614 kuberuntime_manager.go:818] container start failed: ImageInspectError: Failed to inspect image "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab": rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Jan 09 00:17:30 ingress-addon-legacy-037418 kubelet[1614]: E0109 00:17:30.501835    1614 pod_workers.go:191] Error syncing pod 6b6c75de-6927-4f71-823b-93b68a4417ff ("kube-ingress-dns-minikube_kube-system(6b6c75de-6927-4f71-823b-93b68a4417ff)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with ImageInspectError: "Failed to inspect image \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\": rpc error: code = Unknown desc = short-name \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\" did not resolve to an alias and no unqualified-search registries are defined in \"/etc/containers/registries.conf\""
	Jan 09 00:17:37 ingress-addon-legacy-037418 kubelet[1614]: I0109 00:17:37.511863    1614 reconciler.go:196] operationExecutor.UnmountVolume started for volume "minikube-ingress-dns-token-x6tzs" (UniqueName: "kubernetes.io/secret/6b6c75de-6927-4f71-823b-93b68a4417ff-minikube-ingress-dns-token-x6tzs") pod "6b6c75de-6927-4f71-823b-93b68a4417ff" (UID: "6b6c75de-6927-4f71-823b-93b68a4417ff")
	Jan 09 00:17:37 ingress-addon-legacy-037418 kubelet[1614]: I0109 00:17:37.518461    1614 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b6c75de-6927-4f71-823b-93b68a4417ff-minikube-ingress-dns-token-x6tzs" (OuterVolumeSpecName: "minikube-ingress-dns-token-x6tzs") pod "6b6c75de-6927-4f71-823b-93b68a4417ff" (UID: "6b6c75de-6927-4f71-823b-93b68a4417ff"). InnerVolumeSpecName "minikube-ingress-dns-token-x6tzs". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jan 09 00:17:37 ingress-addon-legacy-037418 kubelet[1614]: I0109 00:17:37.612209    1614 reconciler.go:319] Volume detached for volume "minikube-ingress-dns-token-x6tzs" (UniqueName: "kubernetes.io/secret/6b6c75de-6927-4f71-823b-93b68a4417ff-minikube-ingress-dns-token-x6tzs") on node "ingress-addon-legacy-037418" DevicePath ""
	Jan 09 00:17:39 ingress-addon-legacy-037418 kubelet[1614]: I0109 00:17:39.500705    1614 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: df55d28600963d24528a4636581f3f5fe37a2fe7ce755f8c688aef5a2a043efd
	Jan 09 00:17:40 ingress-addon-legacy-037418 kubelet[1614]: E0109 00:17:40.093655    1614 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-dbvsf.17a885b25df6c95c", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-dbvsf", UID:"a0e96d08-6e4f-4edb-8eaf-6589a6d22fe6", APIVersion:"v1", ResourceVersion:"484", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stopping container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-037418"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc15f4209052d615c, ext:217116259036, loc:(*time.Location)(0x6a0ef20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc15f4209052d615c, ext:217116259036, loc:(*time.Location)(0x6a0ef20)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-dbvsf.17a885b25df6c95c" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Jan 09 00:17:40 ingress-addon-legacy-037418 kubelet[1614]: E0109 00:17:40.106951    1614 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-dbvsf.17a885b25df6c95c", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-dbvsf", UID:"a0e96d08-6e4f-4edb-8eaf-6589a6d22fe6", APIVersion:"v1", ResourceVersion:"484", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stopping container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-037418"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc15f4209052d615c, ext:217116259036, loc:(*time.Location)(0x6a0ef20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc15f420906032970, ext:217130269425, loc:(*time.Location)(0x6a0ef20)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-dbvsf.17a885b25df6c95c" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Jan 09 00:17:40 ingress-addon-legacy-037418 kubelet[1614]: I0109 00:17:40.140010    1614 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: df55d28600963d24528a4636581f3f5fe37a2fe7ce755f8c688aef5a2a043efd
	Jan 09 00:17:40 ingress-addon-legacy-037418 kubelet[1614]: I0109 00:17:40.140260    1614 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 3a98d98c7af93975e159a5dacc84ba2a44525e67cf9eba3c06f36e032b643755
	Jan 09 00:17:40 ingress-addon-legacy-037418 kubelet[1614]: E0109 00:17:40.140513    1614 pod_workers.go:191] Error syncing pod 6fe3ed93-4ee0-43bd-9b90-fa4d287b1511 ("hello-world-app-5f5d8b66bb-csf29_default(6fe3ed93-4ee0-43bd-9b90-fa4d287b1511)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 20s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-csf29_default(6fe3ed93-4ee0-43bd-9b90-fa4d287b1511)"
	Jan 09 00:17:43 ingress-addon-legacy-037418 kubelet[1614]: W0109 00:17:43.146235    1614 pod_container_deletor.go:77] Container "ca9238e7cbf6afb3de4a746c6b46b5593f9a1fd22ddfd5980a80c1f372fd803a" not found in pod's containers
	Jan 09 00:17:44 ingress-addon-legacy-037418 kubelet[1614]: I0109 00:17:44.228169    1614 reconciler.go:196] operationExecutor.UnmountVolume started for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/a0e96d08-6e4f-4edb-8eaf-6589a6d22fe6-webhook-cert") pod "a0e96d08-6e4f-4edb-8eaf-6589a6d22fe6" (UID: "a0e96d08-6e4f-4edb-8eaf-6589a6d22fe6")
	Jan 09 00:17:44 ingress-addon-legacy-037418 kubelet[1614]: I0109 00:17:44.228236    1614 reconciler.go:196] operationExecutor.UnmountVolume started for volume "ingress-nginx-token-v8xnx" (UniqueName: "kubernetes.io/secret/a0e96d08-6e4f-4edb-8eaf-6589a6d22fe6-ingress-nginx-token-v8xnx") pod "a0e96d08-6e4f-4edb-8eaf-6589a6d22fe6" (UID: "a0e96d08-6e4f-4edb-8eaf-6589a6d22fe6")
	Jan 09 00:17:44 ingress-addon-legacy-037418 kubelet[1614]: I0109 00:17:44.234978    1614 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0e96d08-6e4f-4edb-8eaf-6589a6d22fe6-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "a0e96d08-6e4f-4edb-8eaf-6589a6d22fe6" (UID: "a0e96d08-6e4f-4edb-8eaf-6589a6d22fe6"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jan 09 00:17:44 ingress-addon-legacy-037418 kubelet[1614]: I0109 00:17:44.236736    1614 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0e96d08-6e4f-4edb-8eaf-6589a6d22fe6-ingress-nginx-token-v8xnx" (OuterVolumeSpecName: "ingress-nginx-token-v8xnx") pod "a0e96d08-6e4f-4edb-8eaf-6589a6d22fe6" (UID: "a0e96d08-6e4f-4edb-8eaf-6589a6d22fe6"). InnerVolumeSpecName "ingress-nginx-token-v8xnx". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jan 09 00:17:44 ingress-addon-legacy-037418 kubelet[1614]: I0109 00:17:44.328537    1614 reconciler.go:319] Volume detached for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/a0e96d08-6e4f-4edb-8eaf-6589a6d22fe6-webhook-cert") on node "ingress-addon-legacy-037418" DevicePath ""
	Jan 09 00:17:44 ingress-addon-legacy-037418 kubelet[1614]: I0109 00:17:44.328587    1614 reconciler.go:319] Volume detached for volume "ingress-nginx-token-v8xnx" (UniqueName: "kubernetes.io/secret/a0e96d08-6e4f-4edb-8eaf-6589a6d22fe6-ingress-nginx-token-v8xnx") on node "ingress-addon-legacy-037418" DevicePath ""
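The ImageInspectError entries at 00:17:30 are CRI-O refusing to resolve a short image name: the node's /etc/containers/registries.conf defines no unqualified-search registries, so "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:..." cannot be expanded to a registry. A minimal fix sketch, assuming docker.io is the intended registry, is to reference the image fully qualified (docker.io/cryptexlabs/...) or to declare a search registry:

	# /etc/containers/registries.conf (sketch; docker.io as the search registry is an assumption)
	unqualified-search-registries = ["docker.io"]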
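Separately, the hello-world-app restart delay grows from "back-off 10s" at 00:17:26 to "back-off 20s" at 00:17:40: kubelet doubles the CrashLoopBackOff delay after each failed restart, up to a 5m0s cap. The crashed container's own output is usually the fastest lead; a triage sketch against this cluster:

	kubectl logs hello-world-app-5f5d8b66bb-csf29 --previous
	kubectl describe pod hello-world-app-5f5d8b66bb-csf29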
	
	
	==> storage-provisioner [567671397e2b74b104c7a76ff65d55b70f730a3e4f55487854dc150cba11360e] <==
	I0109 00:14:29.091325       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0109 00:14:29.108067       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0109 00:14:29.108163       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0109 00:14:29.115277       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0109 00:14:29.115556       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-037418_be215e42-15b8-4268-abcd-a60d5bbad3d0!
	I0109 00:14:29.115690       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"546c0aa6-bbd0-4b3e-8b4b-2258f543f175", APIVersion:"v1", ResourceVersion:"420", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ingress-addon-legacy-037418_be215e42-15b8-4268-abcd-a60d5bbad3d0 became leader
	I0109 00:14:29.216513       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-037418_be215e42-15b8-4268-abcd-a60d5bbad3d0!
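The leaderelection.go lines show client-go's leader-election helper guaranteeing a single active provisioner per cluster. For reference, a minimal sketch of that pattern; the Lease-based lock, identity string, and durations below are illustrative (the provisioner logged above actually acquires an Endpoints-based lock named k8s.io-minikube-hostpath):

	package main
	
	import (
		"context"
		"log"
		"time"
	
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
		"k8s.io/client-go/tools/leaderelection"
		"k8s.io/client-go/tools/leaderelection/resourcelock"
	)
	
	func main() {
		cfg, err := rest.InClusterConfig() // assumes the process runs in-cluster
		if err != nil {
			log.Fatal(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)
	
		// Lock named after the lease in the log above. Using a Lease object is an
		// assumption; older provisioners like this one use an Endpoints lock.
		lock := &resourcelock.LeaseLock{
			LeaseMeta:  metav1.ObjectMeta{Namespace: "kube-system", Name: "k8s.io-minikube-hostpath"},
			Client:     client.CoordinationV1(),
			LockConfig: resourcelock.ResourceLockConfig{Identity: "example-provisioner-1"},
		}
	
		leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
			Lock:          lock,
			LeaseDuration: 15 * time.Second, // how long an acquired lease is valid
			RenewDeadline: 10 * time.Second, // leader must renew before this elapses
			RetryPeriod:   2 * time.Second,  // how often candidates retry acquisition
			Callbacks: leaderelection.LeaderCallbacks{
				OnStartedLeading: func(ctx context.Context) { log.Println("acquired lease; starting controller") },
				OnStoppedLeading: func() { log.Println("lost lease; shutting down") },
			},
		})
	}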
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ingress-addon-legacy-037418 -n ingress-addon-legacy-037418
helpers_test.go:261: (dbg) Run:  kubectl --context ingress-addon-legacy-037418 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (178.37s)

TestMultiNode/serial/PingHostFrom2Pods (3.98s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:580: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-979047 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:588: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-979047 -- exec busybox-5bc68d56bd-4v5vc -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:599: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-979047 -- exec busybox-5bc68d56bd-4v5vc -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:599: (dbg) Non-zero exit: out/minikube-linux-arm64 kubectl -p multinode-979047 -- exec busybox-5bc68d56bd-4v5vc -- sh -c "ping -c 1 192.168.58.1": exit status 1 (222.178831ms)

-- stdout --
	PING 192.168.58.1 (192.168.58.1): 56 data bytes

-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

** /stderr **
multinode_test.go:600: Failed to ping host (192.168.58.1) from pod (busybox-5bc68d56bd-4v5vc): exit status 1
multinode_test.go:588: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-979047 -- exec busybox-5bc68d56bd-bxf99 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:599: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-979047 -- exec busybox-5bc68d56bd-bxf99 -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:599: (dbg) Non-zero exit: out/minikube-linux-arm64 kubectl -p multinode-979047 -- exec busybox-5bc68d56bd-bxf99 -- sh -c "ping -c 1 192.168.58.1": exit status 1 (216.240326ms)

-- stdout --
	PING 192.168.58.1 (192.168.58.1): 56 data bytes

-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

** /stderr **
multinode_test.go:600: Failed to ping host (192.168.58.1) from pod (busybox-5bc68d56bd-bxf99): exit status 1
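busybox prints "ping: permission denied (are you root?)" when it cannot open a raw ICMP socket, i.e. the container lacks CAP_NET_RAW; CRI-O's default capability set omits NET_RAW (Docker's includes it), which fits this crio-runtime job. The ping dies inside the pod before any packet reaches 192.168.58.1. A sketch of one remedy, with illustrative names rather than the test's actual manifest:

	# Sketch: pod whose ping can open a raw socket via an added capability.
	apiVersion: v1
	kind: Pod
	metadata:
	  name: busybox-ping        # hypothetical name
	spec:
	  containers:
	  - name: busybox
	    image: busybox
	    command: ["sleep", "3600"]
	    securityContext:
	      capabilities:
	        add: ["NET_RAW"]

Alternatively, widening the net.ipv4.ping_group_range sysctl lets unprivileged ICMP datagram sockets work without the capability.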
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-979047
helpers_test.go:235: (dbg) docker inspect multinode-979047:

-- stdout --
	[
	    {
	        "Id": "4ab6ef7ad13d9d90167e1fb36a66ae45b1b7b7b23777e167f992d915692cf603",
	        "Created": "2024-01-09T00:24:32.057652772Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1748004,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-01-09T00:24:32.385460342Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a5be0745bf7211988da1521fe4ee64cb5f5dee2ca8e3061f061c5272199c616c",
	        "ResolvConfPath": "/var/lib/docker/containers/4ab6ef7ad13d9d90167e1fb36a66ae45b1b7b7b23777e167f992d915692cf603/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/4ab6ef7ad13d9d90167e1fb36a66ae45b1b7b7b23777e167f992d915692cf603/hostname",
	        "HostsPath": "/var/lib/docker/containers/4ab6ef7ad13d9d90167e1fb36a66ae45b1b7b7b23777e167f992d915692cf603/hosts",
	        "LogPath": "/var/lib/docker/containers/4ab6ef7ad13d9d90167e1fb36a66ae45b1b7b7b23777e167f992d915692cf603/4ab6ef7ad13d9d90167e1fb36a66ae45b1b7b7b23777e167f992d915692cf603-json.log",
	        "Name": "/multinode-979047",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "multinode-979047:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "multinode-979047",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/0453810c35cde9282fa0cfca5bca5229db6bda9979e14dbad26bba5424123ddc-init/diff:/var/lib/docker/overlay2/a443ad727e446e5b332ea48292deac5ef22cb43b6aa42ee65e414679b2407c31/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0453810c35cde9282fa0cfca5bca5229db6bda9979e14dbad26bba5424123ddc/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0453810c35cde9282fa0cfca5bca5229db6bda9979e14dbad26bba5424123ddc/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0453810c35cde9282fa0cfca5bca5229db6bda9979e14dbad26bba5424123ddc/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "multinode-979047",
	                "Source": "/var/lib/docker/volumes/multinode-979047/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "multinode-979047",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "multinode-979047",
	                "name.minikube.sigs.k8s.io": "multinode-979047",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "2aa12c9c09be208121eb6ad86ab6d620eeb700fe70d385ac0832a877e488a3d3",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34444"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34443"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34440"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34442"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34441"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/2aa12c9c09be",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "multinode-979047": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "4ab6ef7ad13d",
	                        "multinode-979047"
	                    ],
	                    "NetworkID": "65d7500bf19ca8abe386d6c4321b821d7446b688f8cb144c286a047f24eab33f",
	                    "EndpointID": "96e8b91a4da6813bd17cf0f662bc3a58912cfa5692a11ae5230df8bf1f4fad89",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3a:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
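Note how ports are wired in the inspect output: HostConfig.PortBindings requests 127.0.0.1 with an empty HostPort, so Docker assigns ephemeral host ports at container start, and the actual values (34440-34444) only appear under NetworkSettings.Ports. One way to read an assignment back out, sketched with docker's Go-template syntax:

	docker inspect -f '{{ (index (index .NetworkSettings.Ports "22/tcp") 0).HostPort }}' multinode-979047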
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p multinode-979047 -n multinode-979047
helpers_test.go:244: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p multinode-979047 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p multinode-979047 logs -n 25: (1.50774673s)
helpers_test.go:252: TestMultiNode/serial/PingHostFrom2Pods logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -p mount-start-2-327096                           | mount-start-2-327096 | jenkins | v1.32.0 | 09 Jan 24 00:24 UTC | 09 Jan 24 00:24 UTC |
	|         | --memory=2048 --mount                             |                      |         |         |                     |                     |
	|         | --mount-gid 0 --mount-msize                       |                      |         |         |                     |                     |
	|         | 6543 --mount-port 46465                           |                      |         |         |                     |                     |
	|         | --mount-uid 0 --no-kubernetes                     |                      |         |         |                     |                     |
	|         | --driver=docker                                   |                      |         |         |                     |                     |
	|         | --container-runtime=crio                          |                      |         |         |                     |                     |
	| ssh     | mount-start-2-327096 ssh -- ls                    | mount-start-2-327096 | jenkins | v1.32.0 | 09 Jan 24 00:24 UTC | 09 Jan 24 00:24 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| delete  | -p mount-start-1-325094                           | mount-start-1-325094 | jenkins | v1.32.0 | 09 Jan 24 00:24 UTC | 09 Jan 24 00:24 UTC |
	|         | --alsologtostderr -v=5                            |                      |         |         |                     |                     |
	| ssh     | mount-start-2-327096 ssh -- ls                    | mount-start-2-327096 | jenkins | v1.32.0 | 09 Jan 24 00:24 UTC | 09 Jan 24 00:24 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| stop    | -p mount-start-2-327096                           | mount-start-2-327096 | jenkins | v1.32.0 | 09 Jan 24 00:24 UTC | 09 Jan 24 00:24 UTC |
	| start   | -p mount-start-2-327096                           | mount-start-2-327096 | jenkins | v1.32.0 | 09 Jan 24 00:24 UTC | 09 Jan 24 00:24 UTC |
	| ssh     | mount-start-2-327096 ssh -- ls                    | mount-start-2-327096 | jenkins | v1.32.0 | 09 Jan 24 00:24 UTC | 09 Jan 24 00:24 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| delete  | -p mount-start-2-327096                           | mount-start-2-327096 | jenkins | v1.32.0 | 09 Jan 24 00:24 UTC | 09 Jan 24 00:24 UTC |
	| delete  | -p mount-start-1-325094                           | mount-start-1-325094 | jenkins | v1.32.0 | 09 Jan 24 00:24 UTC | 09 Jan 24 00:24 UTC |
	| start   | -p multinode-979047                               | multinode-979047     | jenkins | v1.32.0 | 09 Jan 24 00:24 UTC | 09 Jan 24 00:26 UTC |
	|         | --wait=true --memory=2200                         |                      |         |         |                     |                     |
	|         | --nodes=2 -v=8                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr                                 |                      |         |         |                     |                     |
	|         | --driver=docker                                   |                      |         |         |                     |                     |
	|         | --container-runtime=crio                          |                      |         |         |                     |                     |
	| kubectl | -p multinode-979047 -- apply -f                   | multinode-979047     | jenkins | v1.32.0 | 09 Jan 24 00:26 UTC | 09 Jan 24 00:26 UTC |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                      |         |         |                     |                     |
	| kubectl | -p multinode-979047 -- rollout                    | multinode-979047     | jenkins | v1.32.0 | 09 Jan 24 00:26 UTC | 09 Jan 24 00:26 UTC |
	|         | status deployment/busybox                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-979047 -- get pods -o                | multinode-979047     | jenkins | v1.32.0 | 09 Jan 24 00:26 UTC | 09 Jan 24 00:26 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |         |                     |                     |
	| kubectl | -p multinode-979047 -- get pods -o                | multinode-979047     | jenkins | v1.32.0 | 09 Jan 24 00:26 UTC | 09 Jan 24 00:26 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-979047 -- exec                       | multinode-979047     | jenkins | v1.32.0 | 09 Jan 24 00:26 UTC | 09 Jan 24 00:26 UTC |
	|         | busybox-5bc68d56bd-4v5vc --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-979047 -- exec                       | multinode-979047     | jenkins | v1.32.0 | 09 Jan 24 00:26 UTC | 09 Jan 24 00:26 UTC |
	|         | busybox-5bc68d56bd-bxf99 --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-979047 -- exec                       | multinode-979047     | jenkins | v1.32.0 | 09 Jan 24 00:26 UTC | 09 Jan 24 00:26 UTC |
	|         | busybox-5bc68d56bd-4v5vc --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-979047 -- exec                       | multinode-979047     | jenkins | v1.32.0 | 09 Jan 24 00:26 UTC | 09 Jan 24 00:26 UTC |
	|         | busybox-5bc68d56bd-bxf99 --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-979047 -- exec                       | multinode-979047     | jenkins | v1.32.0 | 09 Jan 24 00:26 UTC | 09 Jan 24 00:26 UTC |
	|         | busybox-5bc68d56bd-4v5vc -- nslookup              |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-979047 -- exec                       | multinode-979047     | jenkins | v1.32.0 | 09 Jan 24 00:26 UTC | 09 Jan 24 00:26 UTC |
	|         | busybox-5bc68d56bd-bxf99 -- nslookup              |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-979047 -- get pods -o                | multinode-979047     | jenkins | v1.32.0 | 09 Jan 24 00:26 UTC | 09 Jan 24 00:26 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-979047 -- exec                       | multinode-979047     | jenkins | v1.32.0 | 09 Jan 24 00:26 UTC | 09 Jan 24 00:26 UTC |
	|         | busybox-5bc68d56bd-4v5vc                          |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-979047 -- exec                       | multinode-979047     | jenkins | v1.32.0 | 09 Jan 24 00:26 UTC |                     |
	|         | busybox-5bc68d56bd-4v5vc -- sh                    |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.168.58.1                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-979047 -- exec                       | multinode-979047     | jenkins | v1.32.0 | 09 Jan 24 00:26 UTC | 09 Jan 24 00:26 UTC |
	|         | busybox-5bc68d56bd-bxf99                          |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-979047 -- exec                       | multinode-979047     | jenkins | v1.32.0 | 09 Jan 24 00:26 UTC |                     |
	|         | busybox-5bc68d56bd-bxf99 -- sh                    |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.168.58.1                         |                      |         |         |                     |                     |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
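In the audit table above, the only rows with a blank End Time are the two "ping -c 1 192.168.58.1" execs, matching the non-zero exits reported earlier.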
	
	
	==> Last Start <==
	Log file created at: 2024/01/09 00:24:26
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.21.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
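(The leading letter in each klog line below is the severity: I=info, W=warning, E=error, F=fatal.)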
	I0109 00:24:26.717182 1747564 out.go:296] Setting OutFile to fd 1 ...
	I0109 00:24:26.717427 1747564 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0109 00:24:26.717454 1747564 out.go:309] Setting ErrFile to fd 2...
	I0109 00:24:26.717473 1747564 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0109 00:24:26.717747 1747564 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17830-1678586/.minikube/bin
	I0109 00:24:26.718212 1747564 out.go:303] Setting JSON to false
	I0109 00:24:26.719138 1747564 start.go:128] hostinfo: {"hostname":"ip-172-31-30-239","uptime":25609,"bootTime":1704734258,"procs":164,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0109 00:24:26.719243 1747564 start.go:138] virtualization:  
	I0109 00:24:26.721970 1747564 out.go:177] * [multinode-979047] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0109 00:24:26.724446 1747564 out.go:177]   - MINIKUBE_LOCATION=17830
	I0109 00:24:26.726347 1747564 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0109 00:24:26.724595 1747564 notify.go:220] Checking for updates...
	I0109 00:24:26.731261 1747564 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17830-1678586/kubeconfig
	I0109 00:24:26.733211 1747564 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17830-1678586/.minikube
	I0109 00:24:26.735191 1747564 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0109 00:24:26.737026 1747564 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0109 00:24:26.739054 1747564 driver.go:392] Setting default libvirt URI to qemu:///system
	I0109 00:24:26.762839 1747564 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I0109 00:24:26.762975 1747564 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0109 00:24:26.841478 1747564 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:35 SystemTime:2024-01-09 00:24:26.831984248 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0109 00:24:26.841584 1747564 docker.go:295] overlay module found
	I0109 00:24:26.845058 1747564 out.go:177] * Using the docker driver based on user configuration
	I0109 00:24:26.846806 1747564 start.go:298] selected driver: docker
	I0109 00:24:26.846831 1747564 start.go:902] validating driver "docker" against <nil>
	I0109 00:24:26.846845 1747564 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0109 00:24:26.847476 1747564 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0109 00:24:26.912863 1747564 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:35 SystemTime:2024-01-09 00:24:26.903629711 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0109 00:24:26.913019 1747564 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0109 00:24:26.913261 1747564 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0109 00:24:26.915412 1747564 out.go:177] * Using Docker driver with root privileges
	I0109 00:24:26.917325 1747564 cni.go:84] Creating CNI manager for ""
	I0109 00:24:26.917345 1747564 cni.go:136] 0 nodes found, recommending kindnet
	I0109 00:24:26.917356 1747564 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I0109 00:24:26.917369 1747564 start_flags.go:323] config:
	{Name:multinode-979047 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-979047 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0109 00:24:26.920715 1747564 out.go:177] * Starting control plane node multinode-979047 in cluster multinode-979047
	I0109 00:24:26.922782 1747564 cache.go:121] Beginning downloading kic base image for docker with crio
	I0109 00:24:26.924480 1747564 out.go:177] * Pulling base image v0.0.42-1704751654-17830 ...
	I0109 00:24:26.926304 1747564 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0109 00:24:26.926348 1747564 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17830-1678586/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4
	I0109 00:24:26.926365 1747564 cache.go:56] Caching tarball of preloaded images
	I0109 00:24:26.926390 1747564 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 in local docker daemon
	I0109 00:24:26.926475 1747564 preload.go:174] Found /home/jenkins/minikube-integration/17830-1678586/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0109 00:24:26.926487 1747564 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0109 00:24:26.926853 1747564 profile.go:148] Saving config to /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/multinode-979047/config.json ...
	I0109 00:24:26.926882 1747564 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/multinode-979047/config.json: {Name:mk7b6828cb6cfcbf5469bbf776ef8a4d442f1464 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0109 00:24:26.943683 1747564 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 in local docker daemon, skipping pull
	I0109 00:24:26.943708 1747564 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 exists in daemon, skipping load
	I0109 00:24:26.943729 1747564 cache.go:194] Successfully downloaded all kic artifacts
	I0109 00:24:26.943792 1747564 start.go:365] acquiring machines lock for multinode-979047: {Name:mk4b9545b96c9ebd2695db580382cd2122c47613 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0109 00:24:26.943905 1747564 start.go:369] acquired machines lock for "multinode-979047" in 90.889µs
	I0109 00:24:26.943934 1747564 start.go:93] Provisioning new machine with config: &{Name:multinode-979047 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-979047 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0109 00:24:26.944017 1747564 start.go:125] createHost starting for "" (driver="docker")
	I0109 00:24:26.947543 1747564 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0109 00:24:26.947784 1747564 start.go:159] libmachine.API.Create for "multinode-979047" (driver="docker")
	I0109 00:24:26.947848 1747564 client.go:168] LocalClient.Create starting
	I0109 00:24:26.947932 1747564 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17830-1678586/.minikube/certs/ca.pem
	I0109 00:24:26.947979 1747564 main.go:141] libmachine: Decoding PEM data...
	I0109 00:24:26.947997 1747564 main.go:141] libmachine: Parsing certificate...
	I0109 00:24:26.948052 1747564 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17830-1678586/.minikube/certs/cert.pem
	I0109 00:24:26.948078 1747564 main.go:141] libmachine: Decoding PEM data...
	I0109 00:24:26.948096 1747564 main.go:141] libmachine: Parsing certificate...
	I0109 00:24:26.948459 1747564 cli_runner.go:164] Run: docker network inspect multinode-979047 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0109 00:24:26.964978 1747564 cli_runner.go:211] docker network inspect multinode-979047 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0109 00:24:26.965057 1747564 network_create.go:281] running [docker network inspect multinode-979047] to gather additional debugging logs...
	I0109 00:24:26.965097 1747564 cli_runner.go:164] Run: docker network inspect multinode-979047
	W0109 00:24:26.981905 1747564 cli_runner.go:211] docker network inspect multinode-979047 returned with exit code 1
	I0109 00:24:26.981941 1747564 network_create.go:284] error running [docker network inspect multinode-979047]: docker network inspect multinode-979047: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-979047 not found
	I0109 00:24:26.981964 1747564 network_create.go:286] output of [docker network inspect multinode-979047]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-979047 not found
	
	** /stderr **
	I0109 00:24:26.982067 1747564 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0109 00:24:26.999026 1747564 network.go:214] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-105ffd575afe IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:d2:7c:7b:ae} reservation:<nil>}
	I0109 00:24:26.999380 1747564 network.go:209] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400250dcc0}
	I0109 00:24:26.999401 1747564 network_create.go:124] attempt to create docker network multinode-979047 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0109 00:24:26.999462 1747564 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-979047 multinode-979047
	I0109 00:24:27.072844 1747564 network_create.go:108] docker network multinode-979047 192.168.58.0/24 created
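The network step above boils down to two docker CLI calls driven through cli_runner. As a minimal sketch of that create-then-inspect pattern (hypothetical code, not minikube's actual cli_runner; the network name, subnet, and gateway are taken from the log lines above):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // Sketch only: create a bridge network like the one in the log above,
    // then inspect it with a Go template, mirroring the `docker network
    // create` / `docker network inspect` invocations recorded by
    // cli_runner.go. Error handling is deliberately minimal.
    func main() {
        name, subnet, gateway := "multinode-979047", "192.168.58.0/24", "192.168.58.1"

        create := exec.Command("docker", "network", "create",
            "--driver=bridge",
            "--subnet="+subnet,
            "--gateway="+gateway,
            "-o", "--ip-masq", "-o", "--icc",
            "-o", "com.docker.network.driver.mtu=1500",
            "--label=created_by.minikube.sigs.k8s.io=true",
            name)
        if out, err := create.CombinedOutput(); err != nil {
            fmt.Printf("create failed: %v\n%s", err, out)
            return
        }

        inspect := exec.Command("docker", "network", "inspect", name,
            "--format", "{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}")
        out, err := inspect.CombinedOutput()
        fmt.Printf("inspect: %s err: %v\n", out, err)
    }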
	I0109 00:24:27.072879 1747564 kic.go:121] calculated static IP "192.168.58.2" for the "multinode-979047" container
	I0109 00:24:27.072963 1747564 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0109 00:24:27.089594 1747564 cli_runner.go:164] Run: docker volume create multinode-979047 --label name.minikube.sigs.k8s.io=multinode-979047 --label created_by.minikube.sigs.k8s.io=true
	I0109 00:24:27.108509 1747564 oci.go:103] Successfully created a docker volume multinode-979047
	I0109 00:24:27.108595 1747564 cli_runner.go:164] Run: docker run --rm --name multinode-979047-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-979047 --entrypoint /usr/bin/test -v multinode-979047:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 -d /var/lib
	I0109 00:24:27.683202 1747564 oci.go:107] Successfully prepared a docker volume multinode-979047
	I0109 00:24:27.683260 1747564 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0109 00:24:27.683282 1747564 kic.go:194] Starting extracting preloaded images to volume ...
	I0109 00:24:27.683376 1747564 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17830-1678586/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v multinode-979047:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 -I lz4 -xf /preloaded.tar -C /extractDir
	I0109 00:24:31.964044 1747564 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17830-1678586/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v multinode-979047:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 -I lz4 -xf /preloaded.tar -C /extractDir: (4.280632524s)
	I0109 00:24:31.964080 1747564 kic.go:203] duration metric: took 4.280794 seconds to extract preloaded images to volume
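The extraction above uses a throwaway "sidecar" container: the lz4 preload tarball is bind-mounted read-only, the machine's named volume is mounted at /extractDir, and tar runs inside the kicbase image, so the node's /var is populated before the node container ever starts. A minimal sketch of the same pattern (paths abbreviated; the tarball path is hypothetical, the image and volume names come from the log):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // Sketch of the preload-extraction trick shown in the log: run tar
    // inside the kicbase image against a named volume. Illustrative, not
    // minikube's implementation.
    func main() {
        tarball := "preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4" // hypothetical local path
        volume := "multinode-979047"
        image := "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830"

        cmd := exec.Command("docker", "run", "--rm",
            "--entrypoint", "/usr/bin/tar",
            "-v", tarball+":/preloaded.tar:ro",
            "-v", volume+":/extractDir",
            image,
            "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
        out, err := cmd.CombinedOutput()
        fmt.Printf("%s err: %v\n", out, err)
    }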
	W0109 00:24:31.964222 1747564 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0109 00:24:31.964339 1747564 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0109 00:24:32.041245 1747564 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-979047 --name multinode-979047 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-979047 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-979047 --network multinode-979047 --ip 192.168.58.2 --volume multinode-979047:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617
	I0109 00:24:32.393141 1747564 cli_runner.go:164] Run: docker container inspect multinode-979047 --format={{.State.Running}}
	I0109 00:24:32.415888 1747564 cli_runner.go:164] Run: docker container inspect multinode-979047 --format={{.State.Status}}
	I0109 00:24:32.444184 1747564 cli_runner.go:164] Run: docker exec multinode-979047 stat /var/lib/dpkg/alternatives/iptables
	I0109 00:24:32.532329 1747564 oci.go:144] the created container "multinode-979047" has a running status.
	I0109 00:24:32.532360 1747564 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17830-1678586/.minikube/machines/multinode-979047/id_rsa...
	I0109 00:24:33.542077 1747564 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-1678586/.minikube/machines/multinode-979047/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0109 00:24:33.542129 1747564 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17830-1678586/.minikube/machines/multinode-979047/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0109 00:24:33.565135 1747564 cli_runner.go:164] Run: docker container inspect multinode-979047 --format={{.State.Status}}
	I0109 00:24:33.589678 1747564 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0109 00:24:33.589711 1747564 kic_runner.go:114] Args: [docker exec --privileged multinode-979047 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0109 00:24:33.665121 1747564 cli_runner.go:164] Run: docker container inspect multinode-979047 --format={{.State.Status}}
	I0109 00:24:33.688597 1747564 machine.go:88] provisioning docker machine ...
	I0109 00:24:33.688631 1747564 ubuntu.go:169] provisioning hostname "multinode-979047"
	I0109 00:24:33.688696 1747564 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-979047
	I0109 00:24:33.706865 1747564 main.go:141] libmachine: Using SSH client type: native
	I0109 00:24:33.707364 1747564 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bfa60] 0x3c21d0 <nil>  [] 0s} 127.0.0.1 34444 <nil> <nil>}
	I0109 00:24:33.707383 1747564 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-979047 && echo "multinode-979047" | sudo tee /etc/hostname
	I0109 00:24:33.868721 1747564 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-979047
	
	I0109 00:24:33.868817 1747564 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-979047
	I0109 00:24:33.887466 1747564 main.go:141] libmachine: Using SSH client type: native
	I0109 00:24:33.887887 1747564 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bfa60] 0x3c21d0 <nil>  [] 0s} 127.0.0.1 34444 <nil> <nil>}
	I0109 00:24:33.887911 1747564 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-979047' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-979047/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-979047' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0109 00:24:34.035431 1747564 main.go:141] libmachine: SSH cmd err, output: <nil>: 
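Provisioning runs these shell snippets over SSH to the container's forwarded port (127.0.0.1:34444 in this run) using the freshly generated id_rsa key. A minimal sketch of that step with golang.org/x/crypto/ssh (hypothetical code; minikube's real ssh_runner adds retries and timeouts, and the key path here is a placeholder):

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    // Sketch of the hostname-provisioning step above: dial the forwarded
    // SSH port as the "docker" user and run the same `sudo hostname ...`
    // command the log records.
    func main() {
        key, err := os.ReadFile("id_rsa") // hypothetical path; the log keeps it under .minikube/machines/multinode-979047/
        if err != nil {
            panic(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            panic(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only for a local test node
        }
        client, err := ssh.Dial("tcp", "127.0.0.1:34444", cfg)
        if err != nil {
            panic(err)
        }
        defer client.Close()

        sess, err := client.NewSession()
        if err != nil {
            panic(err)
        }
        defer sess.Close()

        out, err := sess.CombinedOutput(`sudo hostname multinode-979047 && echo "multinode-979047" | sudo tee /etc/hostname`)
        fmt.Printf("%s err: %v\n", out, err)
    }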
	I0109 00:24:34.035500 1747564 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17830-1678586/.minikube CaCertPath:/home/jenkins/minikube-integration/17830-1678586/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17830-1678586/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17830-1678586/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17830-1678586/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17830-1678586/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17830-1678586/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17830-1678586/.minikube}
	I0109 00:24:34.035532 1747564 ubuntu.go:177] setting up certificates
	I0109 00:24:34.035554 1747564 provision.go:83] configureAuth start
	I0109 00:24:34.035673 1747564 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-979047
	I0109 00:24:34.053990 1747564 provision.go:138] copyHostCerts
	I0109 00:24:34.054027 1747564 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-1678586/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17830-1678586/.minikube/ca.pem
	I0109 00:24:34.054057 1747564 exec_runner.go:144] found /home/jenkins/minikube-integration/17830-1678586/.minikube/ca.pem, removing ...
	I0109 00:24:34.054063 1747564 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17830-1678586/.minikube/ca.pem
	I0109 00:24:34.054138 1747564 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17830-1678586/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17830-1678586/.minikube/ca.pem (1082 bytes)
	I0109 00:24:34.054266 1747564 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-1678586/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17830-1678586/.minikube/cert.pem
	I0109 00:24:34.054283 1747564 exec_runner.go:144] found /home/jenkins/minikube-integration/17830-1678586/.minikube/cert.pem, removing ...
	I0109 00:24:34.054289 1747564 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17830-1678586/.minikube/cert.pem
	I0109 00:24:34.054316 1747564 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17830-1678586/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17830-1678586/.minikube/cert.pem (1123 bytes)
	I0109 00:24:34.054356 1747564 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-1678586/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17830-1678586/.minikube/key.pem
	I0109 00:24:34.054370 1747564 exec_runner.go:144] found /home/jenkins/minikube-integration/17830-1678586/.minikube/key.pem, removing ...
	I0109 00:24:34.054374 1747564 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17830-1678586/.minikube/key.pem
	I0109 00:24:34.054397 1747564 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17830-1678586/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17830-1678586/.minikube/key.pem (1679 bytes)
	I0109 00:24:34.054602 1747564 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17830-1678586/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17830-1678586/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17830-1678586/.minikube/certs/ca-key.pem org=jenkins.multinode-979047 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube multinode-979047]
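The server cert generated here is signed by the profile's CA and carries the SAN list printed in the log (192.168.58.2, 127.0.0.1, localhost, minikube, multinode-979047). A compact sketch of issuing such a cert with Go's standard library (hypothetical helper, not minikube's provision code; file paths are placeholders and the CA key is assumed to be an RSA PKCS#1 PEM):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func check(err error) {
        if err != nil {
            panic(err)
        }
    }

    // Sketch: issue a CA-signed server cert with the SANs from the log.
    // Real code should randomize the serial number.
    func main() {
        caPEM, err := os.ReadFile("ca.pem") // hypothetical paths; the log keeps these under .minikube/certs/
        check(err)
        caKeyPEM, err := os.ReadFile("ca-key.pem")
        check(err)
        caBlock, _ := pem.Decode(caPEM)
        keyBlock, _ := pem.Decode(caKeyPEM)
        if caBlock == nil || keyBlock == nil {
            panic("bad PEM input")
        }
        caCert, err := x509.ParseCertificate(caBlock.Bytes)
        check(err)
        caKey, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes)
        check(err)

        serverKey, err := rsa.GenerateKey(rand.Reader, 2048)
        check(err)
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.multinode-979047"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config above
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses:  []net.IP{net.ParseIP("192.168.58.2"), net.ParseIP("127.0.0.1")},
            DNSNames:     []string{"localhost", "minikube", "multinode-979047"},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
        check(err)
        check(pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}))
    }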
	I0109 00:24:34.673257 1747564 provision.go:172] copyRemoteCerts
	I0109 00:24:34.673347 1747564 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0109 00:24:34.673391 1747564 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-979047
	I0109 00:24:34.690372 1747564 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34444 SSHKeyPath:/home/jenkins/minikube-integration/17830-1678586/.minikube/machines/multinode-979047/id_rsa Username:docker}
	I0109 00:24:34.796412 1747564 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-1678586/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0109 00:24:34.796472 1747564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-1678586/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0109 00:24:34.823623 1747564 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-1678586/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0109 00:24:34.823688 1747564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-1678586/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0109 00:24:34.851724 1747564 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-1678586/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0109 00:24:34.851785 1747564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-1678586/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0109 00:24:34.879996 1747564 provision.go:86] duration metric: configureAuth took 844.396509ms
	I0109 00:24:34.880026 1747564 ubuntu.go:193] setting minikube options for container-runtime
	I0109 00:24:34.880221 1747564 config.go:182] Loaded profile config "multinode-979047": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0109 00:24:34.880328 1747564 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-979047
	I0109 00:24:34.897384 1747564 main.go:141] libmachine: Using SSH client type: native
	I0109 00:24:34.897824 1747564 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bfa60] 0x3c21d0 <nil>  [] 0s} 127.0.0.1 34444 <nil> <nil>}
	I0109 00:24:34.897846 1747564 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0109 00:24:35.152873 1747564 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0109 00:24:35.152969 1747564 machine.go:91] provisioned docker machine in 1.464347507s
	I0109 00:24:35.152994 1747564 client.go:171] LocalClient.Create took 8.205135513s
	I0109 00:24:35.153045 1747564 start.go:167] duration metric: libmachine.API.Create for "multinode-979047" took 8.205261692s
	I0109 00:24:35.153073 1747564 start.go:300] post-start starting for "multinode-979047" (driver="docker")
	I0109 00:24:35.153112 1747564 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0109 00:24:35.153205 1747564 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0109 00:24:35.153276 1747564 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-979047
	I0109 00:24:35.171870 1747564 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34444 SSHKeyPath:/home/jenkins/minikube-integration/17830-1678586/.minikube/machines/multinode-979047/id_rsa Username:docker}
	I0109 00:24:35.281122 1747564 ssh_runner.go:195] Run: cat /etc/os-release
	I0109 00:24:35.284834 1747564 command_runner.go:130] > PRETTY_NAME="Ubuntu 22.04.3 LTS"
	I0109 00:24:35.284856 1747564 command_runner.go:130] > NAME="Ubuntu"
	I0109 00:24:35.284864 1747564 command_runner.go:130] > VERSION_ID="22.04"
	I0109 00:24:35.284871 1747564 command_runner.go:130] > VERSION="22.04.3 LTS (Jammy Jellyfish)"
	I0109 00:24:35.284878 1747564 command_runner.go:130] > VERSION_CODENAME=jammy
	I0109 00:24:35.284882 1747564 command_runner.go:130] > ID=ubuntu
	I0109 00:24:35.284887 1747564 command_runner.go:130] > ID_LIKE=debian
	I0109 00:24:35.284896 1747564 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I0109 00:24:35.284903 1747564 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I0109 00:24:35.284911 1747564 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0109 00:24:35.284922 1747564 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0109 00:24:35.284927 1747564 command_runner.go:130] > UBUNTU_CODENAME=jammy
	I0109 00:24:35.285234 1747564 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0109 00:24:35.285265 1747564 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0109 00:24:35.285278 1747564 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0109 00:24:35.285288 1747564 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0109 00:24:35.285298 1747564 filesync.go:126] Scanning /home/jenkins/minikube-integration/17830-1678586/.minikube/addons for local assets ...
	I0109 00:24:35.285356 1747564 filesync.go:126] Scanning /home/jenkins/minikube-integration/17830-1678586/.minikube/files for local assets ...
	I0109 00:24:35.285434 1747564 filesync.go:149] local asset: /home/jenkins/minikube-integration/17830-1678586/.minikube/files/etc/ssl/certs/16839672.pem -> 16839672.pem in /etc/ssl/certs
	I0109 00:24:35.285444 1747564 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-1678586/.minikube/files/etc/ssl/certs/16839672.pem -> /etc/ssl/certs/16839672.pem
	I0109 00:24:35.285546 1747564 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0109 00:24:35.295671 1747564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-1678586/.minikube/files/etc/ssl/certs/16839672.pem --> /etc/ssl/certs/16839672.pem (1708 bytes)
	I0109 00:24:35.323220 1747564 start.go:303] post-start completed in 170.10553ms
	I0109 00:24:35.323680 1747564 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-979047
	I0109 00:24:35.340511 1747564 profile.go:148] Saving config to /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/multinode-979047/config.json ...
	I0109 00:24:35.340777 1747564 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0109 00:24:35.340830 1747564 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-979047
	I0109 00:24:35.357844 1747564 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34444 SSHKeyPath:/home/jenkins/minikube-integration/17830-1678586/.minikube/machines/multinode-979047/id_rsa Username:docker}
	I0109 00:24:35.455917 1747564 command_runner.go:130] > 14%
	I0109 00:24:35.456447 1747564 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0109 00:24:35.461822 1747564 command_runner.go:130] > 168G
	I0109 00:24:35.462302 1747564 start.go:128] duration metric: createHost completed in 8.518272422s
	I0109 00:24:35.462320 1747564 start.go:83] releasing machines lock for "multinode-979047", held for 8.518403681s
	I0109 00:24:35.462392 1747564 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-979047
	I0109 00:24:35.479215 1747564 ssh_runner.go:195] Run: cat /version.json
	I0109 00:24:35.479234 1747564 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0109 00:24:35.479266 1747564 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-979047
	I0109 00:24:35.479302 1747564 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-979047
	I0109 00:24:35.498256 1747564 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34444 SSHKeyPath:/home/jenkins/minikube-integration/17830-1678586/.minikube/machines/multinode-979047/id_rsa Username:docker}
	I0109 00:24:35.516238 1747564 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34444 SSHKeyPath:/home/jenkins/minikube-integration/17830-1678586/.minikube/machines/multinode-979047/id_rsa Username:docker}
	I0109 00:24:35.598541 1747564 command_runner.go:130] > {"iso_version": "v1.32.1-1702708929-17806", "kicbase_version": "v0.0.42-1704751654-17830", "minikube_version": "v1.32.0", "commit": "8e62236f86fac88150e437f293b77692cc68cda5"}
	I0109 00:24:35.598685 1747564 ssh_runner.go:195] Run: systemctl --version
	I0109 00:24:35.730892 1747564 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0109 00:24:35.733883 1747564 command_runner.go:130] > systemd 249 (249.11-0ubuntu3.11)
	I0109 00:24:35.733917 1747564 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I0109 00:24:35.733993 1747564 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0109 00:24:35.878755 1747564 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0109 00:24:35.884055 1747564 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I0109 00:24:35.884077 1747564 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I0109 00:24:35.884084 1747564 command_runner.go:130] > Device: 3ah/58d	Inode: 2083141     Links: 1
	I0109 00:24:35.884092 1747564 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0109 00:24:35.884099 1747564 command_runner.go:130] > Access: 2023-06-14 14:44:50.000000000 +0000
	I0109 00:24:35.884106 1747564 command_runner.go:130] > Modify: 2023-06-14 14:44:50.000000000 +0000
	I0109 00:24:35.884112 1747564 command_runner.go:130] > Change: 2024-01-09 00:01:33.738751998 +0000
	I0109 00:24:35.884118 1747564 command_runner.go:130] >  Birth: 2024-01-09 00:01:33.738751998 +0000
	I0109 00:24:35.884172 1747564 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0109 00:24:35.908482 1747564 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0109 00:24:35.908559 1747564 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0109 00:24:35.947182 1747564 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf, 
	I0109 00:24:35.947207 1747564 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
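Disabling a CNI config here just means renaming it with a ".mk_disabled" suffix so CRI-O stops loading it; the find/-exec mv above is the whole mechanism. A minimal sketch of the same rename pass (hypothetical code; run as root):

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    // Sketch of the CNI-disabling step above: any bridge/podman config in
    // /etc/cni/net.d that is not already disabled gets a ".mk_disabled"
    // suffix, exactly what the `find ... -exec mv` in the log does.
    func main() {
        for _, pat := range []string{"/etc/cni/net.d/*bridge*", "/etc/cni/net.d/*podman*"} {
            matches, err := filepath.Glob(pat)
            if err != nil {
                panic(err) // only possible with a malformed pattern
            }
            for _, m := range matches {
                if filepath.Ext(m) == ".mk_disabled" {
                    continue // already disabled
                }
                if err := os.Rename(m, m+".mk_disabled"); err != nil {
                    fmt.Println("rename failed:", err)
                }
            }
        }
    }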
	I0109 00:24:35.947215 1747564 start.go:475] detecting cgroup driver to use...
	I0109 00:24:35.947245 1747564 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0109 00:24:35.947304 1747564 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0109 00:24:35.965646 1747564 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0109 00:24:35.979294 1747564 docker.go:203] disabling cri-docker service (if available) ...
	I0109 00:24:35.979380 1747564 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0109 00:24:35.994225 1747564 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0109 00:24:36.012032 1747564 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0109 00:24:36.108736 1747564 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0109 00:24:36.204806 1747564 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I0109 00:24:36.204838 1747564 docker.go:219] disabling docker service ...
	I0109 00:24:36.204914 1747564 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0109 00:24:36.227623 1747564 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0109 00:24:36.241403 1747564 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0109 00:24:36.330992 1747564 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I0109 00:24:36.331265 1747564 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0109 00:24:36.436571 1747564 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I0109 00:24:36.436673 1747564 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0109 00:24:36.449583 1747564 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0109 00:24:36.469332 1747564 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0109 00:24:36.469370 1747564 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0109 00:24:36.469420 1747564 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0109 00:24:36.480999 1747564 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0109 00:24:36.481082 1747564 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0109 00:24:36.492736 1747564 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0109 00:24:36.504374 1747564 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
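These three sed invocations pin the pause image, force the cgroupfs cgroup manager, and reset conmon_cgroup to "pod" in /etc/crio/crio.conf.d/02-crio.conf. A minimal sketch of the same edits done in one pass with regexps (hypothetical code; run as root):

    package main

    import (
        "os"
        "regexp"
    )

    // Sketch of the CRI-O config edits above, applied in one read/write
    // cycle instead of three sed calls. Illustrative only.
    func main() {
        path := "/etc/crio/crio.conf.d/02-crio.conf"
        data, err := os.ReadFile(path)
        if err != nil {
            panic(err)
        }
        // drop any existing conmon_cgroup line (sed '/conmon_cgroup = .*/d')
        data = regexp.MustCompile(`(?m)^\s*conmon_cgroup = .*\n`).ReplaceAll(data, nil)
        // force cgroupfs and re-add conmon_cgroup right after it
        data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
            ReplaceAll(data, []byte("cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\""))
        // pin the pause image (sed 's|^.*pause_image = .*$|...|')
        data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
            ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.9"`))
        if err := os.WriteFile(path, data, 0o644); err != nil {
            panic(err)
        }
    }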
	I0109 00:24:36.516280 1747564 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0109 00:24:36.527310 1747564 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0109 00:24:36.537439 1747564 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0109 00:24:36.537536 1747564 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0109 00:24:36.548073 1747564 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0109 00:24:36.634811 1747564 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0109 00:24:36.754919 1747564 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0109 00:24:36.755067 1747564 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0109 00:24:36.759655 1747564 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0109 00:24:36.759676 1747564 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0109 00:24:36.759684 1747564 command_runner.go:130] > Device: 43h/67d	Inode: 186         Links: 1
	I0109 00:24:36.759700 1747564 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0109 00:24:36.759710 1747564 command_runner.go:130] > Access: 2024-01-09 00:24:36.738982220 +0000
	I0109 00:24:36.759717 1747564 command_runner.go:130] > Modify: 2024-01-09 00:24:36.738982220 +0000
	I0109 00:24:36.759726 1747564 command_runner.go:130] > Change: 2024-01-09 00:24:36.738982220 +0000
	I0109 00:24:36.759731 1747564 command_runner.go:130] >  Birth: -
	I0109 00:24:36.759765 1747564 start.go:543] Will wait 60s for crictl version
	I0109 00:24:36.759815 1747564 ssh_runner.go:195] Run: which crictl
	I0109 00:24:36.764086 1747564 command_runner.go:130] > /usr/bin/crictl
	I0109 00:24:36.764153 1747564 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0109 00:24:36.806821 1747564 command_runner.go:130] > Version:  0.1.0
	I0109 00:24:36.806860 1747564 command_runner.go:130] > RuntimeName:  cri-o
	I0109 00:24:36.806867 1747564 command_runner.go:130] > RuntimeVersion:  1.24.6
	I0109 00:24:36.806874 1747564 command_runner.go:130] > RuntimeApiVersion:  v1
	I0109 00:24:36.809250 1747564 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0109 00:24:36.809340 1747564 ssh_runner.go:195] Run: crio --version
	I0109 00:24:36.851632 1747564 command_runner.go:130] > crio version 1.24.6
	I0109 00:24:36.851655 1747564 command_runner.go:130] > Version:          1.24.6
	I0109 00:24:36.851664 1747564 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I0109 00:24:36.851670 1747564 command_runner.go:130] > GitTreeState:     clean
	I0109 00:24:36.851677 1747564 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I0109 00:24:36.851682 1747564 command_runner.go:130] > GoVersion:        go1.18.2
	I0109 00:24:36.851687 1747564 command_runner.go:130] > Compiler:         gc
	I0109 00:24:36.851695 1747564 command_runner.go:130] > Platform:         linux/arm64
	I0109 00:24:36.851705 1747564 command_runner.go:130] > Linkmode:         dynamic
	I0109 00:24:36.851715 1747564 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0109 00:24:36.851722 1747564 command_runner.go:130] > SeccompEnabled:   true
	I0109 00:24:36.851728 1747564 command_runner.go:130] > AppArmorEnabled:  false
	I0109 00:24:36.853726 1747564 ssh_runner.go:195] Run: crio --version
	I0109 00:24:36.893776 1747564 command_runner.go:130] > crio version 1.24.6
	I0109 00:24:36.893799 1747564 command_runner.go:130] > Version:          1.24.6
	I0109 00:24:36.893808 1747564 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I0109 00:24:36.893813 1747564 command_runner.go:130] > GitTreeState:     clean
	I0109 00:24:36.893822 1747564 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I0109 00:24:36.893828 1747564 command_runner.go:130] > GoVersion:        go1.18.2
	I0109 00:24:36.893833 1747564 command_runner.go:130] > Compiler:         gc
	I0109 00:24:36.893845 1747564 command_runner.go:130] > Platform:         linux/arm64
	I0109 00:24:36.893854 1747564 command_runner.go:130] > Linkmode:         dynamic
	I0109 00:24:36.893866 1747564 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0109 00:24:36.893874 1747564 command_runner.go:130] > SeccompEnabled:   true
	I0109 00:24:36.893880 1747564 command_runner.go:130] > AppArmorEnabled:  false
	I0109 00:24:36.898370 1747564 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.6 ...
	I0109 00:24:36.900402 1747564 cli_runner.go:164] Run: docker network inspect multinode-979047 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0109 00:24:36.917067 1747564 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I0109 00:24:36.921891 1747564 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
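The bash one-liner above rewrites /etc/hosts: it drops any stale host.minikube.internal entry and appends the network gateway (192.168.58.1 in this run) under that name so pods can reach the host. A minimal sketch of the same rewrite (hypothetical code; run as root):

    package main

    import (
        "os"
        "strings"
    )

    // Sketch of the /etc/hosts update above: filter out the old
    // host.minikube.internal line, then append the gateway IP.
    func main() {
        data, err := os.ReadFile("/etc/hosts")
        if err != nil {
            panic(err)
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if !strings.HasSuffix(line, "\thost.minikube.internal") {
                kept = append(kept, line)
            }
        }
        kept = append(kept, "192.168.58.1\thost.minikube.internal")
        if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
            panic(err)
        }
    }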
	I0109 00:24:36.934990 1747564 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0109 00:24:36.935070 1747564 ssh_runner.go:195] Run: sudo crictl images --output json
	I0109 00:24:36.998587 1747564 command_runner.go:130] > {
	I0109 00:24:36.998617 1747564 command_runner.go:130] >   "images": [
	I0109 00:24:36.998624 1747564 command_runner.go:130] >     {
	I0109 00:24:36.998634 1747564 command_runner.go:130] >       "id": "04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26",
	I0109 00:24:36.998639 1747564 command_runner.go:130] >       "repoTags": [
	I0109 00:24:36.998646 1747564 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230809-80a64d96"
	I0109 00:24:36.998651 1747564 command_runner.go:130] >       ],
	I0109 00:24:36.998656 1747564 command_runner.go:130] >       "repoDigests": [
	I0109 00:24:36.998666 1747564 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052",
	I0109 00:24:36.998679 1747564 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:f61a1c916e587322444cab4e745a66c8bed6c30208e4dae28d5a1d18c070adb2"
	I0109 00:24:36.998687 1747564 command_runner.go:130] >       ],
	I0109 00:24:36.998692 1747564 command_runner.go:130] >       "size": "60867618",
	I0109 00:24:36.998698 1747564 command_runner.go:130] >       "uid": null,
	I0109 00:24:36.998705 1747564 command_runner.go:130] >       "username": "",
	I0109 00:24:36.998714 1747564 command_runner.go:130] >       "spec": null,
	I0109 00:24:36.998723 1747564 command_runner.go:130] >       "pinned": false
	I0109 00:24:36.998727 1747564 command_runner.go:130] >     },
	I0109 00:24:36.998731 1747564 command_runner.go:130] >     {
	I0109 00:24:36.998742 1747564 command_runner.go:130] >       "id": "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I0109 00:24:36.998749 1747564 command_runner.go:130] >       "repoTags": [
	I0109 00:24:36.998756 1747564 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0109 00:24:36.998763 1747564 command_runner.go:130] >       ],
	I0109 00:24:36.998769 1747564 command_runner.go:130] >       "repoDigests": [
	I0109 00:24:36.998779 1747564 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I0109 00:24:36.998796 1747564 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I0109 00:24:36.998803 1747564 command_runner.go:130] >       ],
	I0109 00:24:36.998814 1747564 command_runner.go:130] >       "size": "29037500",
	I0109 00:24:36.998822 1747564 command_runner.go:130] >       "uid": null,
	I0109 00:24:36.998827 1747564 command_runner.go:130] >       "username": "",
	I0109 00:24:36.998832 1747564 command_runner.go:130] >       "spec": null,
	I0109 00:24:36.998839 1747564 command_runner.go:130] >       "pinned": false
	I0109 00:24:36.998844 1747564 command_runner.go:130] >     },
	I0109 00:24:36.998858 1747564 command_runner.go:130] >     {
	I0109 00:24:36.998865 1747564 command_runner.go:130] >       "id": "97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108",
	I0109 00:24:36.998873 1747564 command_runner.go:130] >       "repoTags": [
	I0109 00:24:36.998879 1747564 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I0109 00:24:36.998889 1747564 command_runner.go:130] >       ],
	I0109 00:24:36.998897 1747564 command_runner.go:130] >       "repoDigests": [
	I0109 00:24:36.998906 1747564 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:74130b944396a0b0ca9af923ee6e03b08a35d98fc1bbaef4e35cf9acc5599105",
	I0109 00:24:36.998919 1747564 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e"
	I0109 00:24:36.998924 1747564 command_runner.go:130] >       ],
	I0109 00:24:36.998929 1747564 command_runner.go:130] >       "size": "51393451",
	I0109 00:24:36.998938 1747564 command_runner.go:130] >       "uid": null,
	I0109 00:24:36.998943 1747564 command_runner.go:130] >       "username": "",
	I0109 00:24:36.998948 1747564 command_runner.go:130] >       "spec": null,
	I0109 00:24:36.998955 1747564 command_runner.go:130] >       "pinned": false
	I0109 00:24:36.998960 1747564 command_runner.go:130] >     },
	I0109 00:24:36.998966 1747564 command_runner.go:130] >     {
	I0109 00:24:36.998974 1747564 command_runner.go:130] >       "id": "9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace",
	I0109 00:24:36.998981 1747564 command_runner.go:130] >       "repoTags": [
	I0109 00:24:36.998987 1747564 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.9-0"
	I0109 00:24:36.998995 1747564 command_runner.go:130] >       ],
	I0109 00:24:36.999000 1747564 command_runner.go:130] >       "repoDigests": [
	I0109 00:24:36.999009 1747564 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3",
	I0109 00:24:36.999023 1747564 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e60789d18cc66486e6db4094383f9732280092f07a1f5455ecbe11d404c8e48b"
	I0109 00:24:36.999038 1747564 command_runner.go:130] >       ],
	I0109 00:24:36.999046 1747564 command_runner.go:130] >       "size": "182203183",
	I0109 00:24:36.999051 1747564 command_runner.go:130] >       "uid": {
	I0109 00:24:36.999062 1747564 command_runner.go:130] >         "value": "0"
	I0109 00:24:36.999066 1747564 command_runner.go:130] >       },
	I0109 00:24:36.999071 1747564 command_runner.go:130] >       "username": "",
	I0109 00:24:36.999081 1747564 command_runner.go:130] >       "spec": null,
	I0109 00:24:36.999087 1747564 command_runner.go:130] >       "pinned": false
	I0109 00:24:36.999097 1747564 command_runner.go:130] >     },
	I0109 00:24:36.999101 1747564 command_runner.go:130] >     {
	I0109 00:24:36.999109 1747564 command_runner.go:130] >       "id": "04b4c447bb9d4840af3bf7e836397379d65df87c86e55dcd27f31a8d11df2419",
	I0109 00:24:36.999117 1747564 command_runner.go:130] >       "repoTags": [
	I0109 00:24:36.999123 1747564 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.28.4"
	I0109 00:24:36.999127 1747564 command_runner.go:130] >       ],
	I0109 00:24:36.999132 1747564 command_runner.go:130] >       "repoDigests": [
	I0109 00:24:36.999141 1747564 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb",
	I0109 00:24:36.999154 1747564 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a4c3e6bec39f5dcb221a2f08266513ab19b7d977ccc76a0bcaf04d4935ac0fb2"
	I0109 00:24:36.999161 1747564 command_runner.go:130] >       ],
	I0109 00:24:36.999169 1747564 command_runner.go:130] >       "size": "121119694",
	I0109 00:24:36.999174 1747564 command_runner.go:130] >       "uid": {
	I0109 00:24:36.999185 1747564 command_runner.go:130] >         "value": "0"
	I0109 00:24:36.999190 1747564 command_runner.go:130] >       },
	I0109 00:24:36.999195 1747564 command_runner.go:130] >       "username": "",
	I0109 00:24:36.999202 1747564 command_runner.go:130] >       "spec": null,
	I0109 00:24:36.999207 1747564 command_runner.go:130] >       "pinned": false
	I0109 00:24:36.999211 1747564 command_runner.go:130] >     },
	I0109 00:24:36.999216 1747564 command_runner.go:130] >     {
	I0109 00:24:36.999226 1747564 command_runner.go:130] >       "id": "9961cbceaf234d59b7dcf8a197a024f3e3ce4b7fe2b67c2378efd3d209ca994b",
	I0109 00:24:36.999232 1747564 command_runner.go:130] >       "repoTags": [
	I0109 00:24:36.999240 1747564 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.28.4"
	I0109 00:24:36.999247 1747564 command_runner.go:130] >       ],
	I0109 00:24:36.999252 1747564 command_runner.go:130] >       "repoDigests": [
	I0109 00:24:36.999262 1747564 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c",
	I0109 00:24:36.999275 1747564 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:fe49ea386d014cbf10cd16f53900b91bb7e7c32c5cf4fdcea550c33bedfdc92e"
	I0109 00:24:36.999280 1747564 command_runner.go:130] >       ],
	I0109 00:24:36.999289 1747564 command_runner.go:130] >       "size": "117252916",
	I0109 00:24:36.999293 1747564 command_runner.go:130] >       "uid": {
	I0109 00:24:36.999299 1747564 command_runner.go:130] >         "value": "0"
	I0109 00:24:36.999303 1747564 command_runner.go:130] >       },
	I0109 00:24:36.999310 1747564 command_runner.go:130] >       "username": "",
	I0109 00:24:36.999315 1747564 command_runner.go:130] >       "spec": null,
	I0109 00:24:36.999322 1747564 command_runner.go:130] >       "pinned": false
	I0109 00:24:36.999328 1747564 command_runner.go:130] >     },
	I0109 00:24:36.999333 1747564 command_runner.go:130] >     {
	I0109 00:24:36.999343 1747564 command_runner.go:130] >       "id": "3ca3ca488cf13fde14cfc4b3ffde0c53a8c161b030f4a444a797fba6aef38c39",
	I0109 00:24:36.999348 1747564 command_runner.go:130] >       "repoTags": [
	I0109 00:24:36.999356 1747564 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.28.4"
	I0109 00:24:36.999360 1747564 command_runner.go:130] >       ],
	I0109 00:24:36.999366 1747564 command_runner.go:130] >       "repoDigests": [
	I0109 00:24:36.999377 1747564 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:460fb2711108dbfbe9ad77860d0fd8aad842c0e97f1aee757b33f2218daece68",
	I0109 00:24:36.999387 1747564 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"
	I0109 00:24:36.999393 1747564 command_runner.go:130] >       ],
	I0109 00:24:36.999398 1747564 command_runner.go:130] >       "size": "69992343",
	I0109 00:24:36.999406 1747564 command_runner.go:130] >       "uid": null,
	I0109 00:24:36.999416 1747564 command_runner.go:130] >       "username": "",
	I0109 00:24:36.999422 1747564 command_runner.go:130] >       "spec": null,
	I0109 00:24:36.999429 1747564 command_runner.go:130] >       "pinned": false
	I0109 00:24:36.999433 1747564 command_runner.go:130] >     },
	I0109 00:24:36.999438 1747564 command_runner.go:130] >     {
	I0109 00:24:36.999448 1747564 command_runner.go:130] >       "id": "05c284c929889d88306fdb3dd14ee2d0132543740f9e247685243214fc3d2c54",
	I0109 00:24:36.999453 1747564 command_runner.go:130] >       "repoTags": [
	I0109 00:24:36.999461 1747564 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.28.4"
	I0109 00:24:36.999466 1747564 command_runner.go:130] >       ],
	I0109 00:24:36.999474 1747564 command_runner.go:130] >       "repoDigests": [
	I0109 00:24:36.999497 1747564 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba",
	I0109 00:24:36.999512 1747564 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:ddb0fb05335238789e9a847f0c6731e1da918c42a389625cb2a7ec577ca20afe"
	I0109 00:24:36.999516 1747564 command_runner.go:130] >       ],
	I0109 00:24:36.999524 1747564 command_runner.go:130] >       "size": "59253556",
	I0109 00:24:36.999528 1747564 command_runner.go:130] >       "uid": {
	I0109 00:24:36.999534 1747564 command_runner.go:130] >         "value": "0"
	I0109 00:24:36.999540 1747564 command_runner.go:130] >       },
	I0109 00:24:36.999548 1747564 command_runner.go:130] >       "username": "",
	I0109 00:24:36.999555 1747564 command_runner.go:130] >       "spec": null,
	I0109 00:24:36.999560 1747564 command_runner.go:130] >       "pinned": false
	I0109 00:24:36.999566 1747564 command_runner.go:130] >     },
	I0109 00:24:36.999573 1747564 command_runner.go:130] >     {
	I0109 00:24:36.999581 1747564 command_runner.go:130] >       "id": "829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e",
	I0109 00:24:36.999588 1747564 command_runner.go:130] >       "repoTags": [
	I0109 00:24:36.999594 1747564 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0109 00:24:36.999601 1747564 command_runner.go:130] >       ],
	I0109 00:24:36.999606 1747564 command_runner.go:130] >       "repoDigests": [
	I0109 00:24:36.999618 1747564 command_runner.go:130] >         "registry.k8s.io/pause@sha256:3ec98b8452dc8ae265a6917dfb81587ac78849e520d5dbba6de524851d20eca6",
	I0109 00:24:36.999627 1747564 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"
	I0109 00:24:36.999634 1747564 command_runner.go:130] >       ],
	I0109 00:24:36.999639 1747564 command_runner.go:130] >       "size": "520014",
	I0109 00:24:36.999644 1747564 command_runner.go:130] >       "uid": {
	I0109 00:24:36.999652 1747564 command_runner.go:130] >         "value": "65535"
	I0109 00:24:36.999656 1747564 command_runner.go:130] >       },
	I0109 00:24:36.999661 1747564 command_runner.go:130] >       "username": "",
	I0109 00:24:36.999669 1747564 command_runner.go:130] >       "spec": null,
	I0109 00:24:36.999678 1747564 command_runner.go:130] >       "pinned": false
	I0109 00:24:36.999682 1747564 command_runner.go:130] >     }
	I0109 00:24:36.999686 1747564 command_runner.go:130] >   ]
	I0109 00:24:36.999693 1747564 command_runner.go:130] > }
	I0109 00:24:36.999895 1747564 crio.go:496] all images are preloaded for cri-o runtime.
	I0109 00:24:36.999909 1747564 crio.go:415] Images already preloaded, skipping extraction
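The "all images are preloaded" decision is driven by the JSON dump above: minikube lists the runtime's images and checks that every image the requested Kubernetes version needs is present, so the tarball extraction can be skipped. A minimal sketch of that check (hypothetical code; the required list here is abbreviated from the log):

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    // Matches the shape of `crictl images --output json` shown above;
    // fields we don't need are omitted.
    type imageList struct {
        Images []struct {
            RepoTags []string `json:"repoTags"`
        } `json:"images"`
    }

    // Sketch of the preload check: decode the image list and confirm each
    // required tag is present.
    func main() {
        out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
        if err != nil {
            panic(err)
        }
        var list imageList
        if err := json.Unmarshal(out, &list); err != nil {
            panic(err)
        }
        have := map[string]bool{}
        for _, img := range list.Images {
            for _, tag := range img.RepoTags {
                have[tag] = true
            }
        }
        for _, want := range []string{
            "registry.k8s.io/kube-apiserver:v1.28.4",
            "registry.k8s.io/etcd:3.5.9-0",
            "registry.k8s.io/pause:3.9",
        } {
            fmt.Println(want, "preloaded:", have[want])
        }
    }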
	I0109 00:24:36.999963 1747564 ssh_runner.go:195] Run: sudo crictl images --output json
	I0109 00:24:37.042998 1747564 command_runner.go:130] > {
	I0109 00:24:37.043021 1747564 command_runner.go:130] >   "images": [
	I0109 00:24:37.043026 1747564 command_runner.go:130] >     {
	I0109 00:24:37.043035 1747564 command_runner.go:130] >       "id": "04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26",
	I0109 00:24:37.043040 1747564 command_runner.go:130] >       "repoTags": [
	I0109 00:24:37.043050 1747564 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230809-80a64d96"
	I0109 00:24:37.043055 1747564 command_runner.go:130] >       ],
	I0109 00:24:37.043060 1747564 command_runner.go:130] >       "repoDigests": [
	I0109 00:24:37.043082 1747564 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052",
	I0109 00:24:37.043096 1747564 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:f61a1c916e587322444cab4e745a66c8bed6c30208e4dae28d5a1d18c070adb2"
	I0109 00:24:37.043100 1747564 command_runner.go:130] >       ],
	I0109 00:24:37.043112 1747564 command_runner.go:130] >       "size": "60867618",
	I0109 00:24:37.043120 1747564 command_runner.go:130] >       "uid": null,
	I0109 00:24:37.043126 1747564 command_runner.go:130] >       "username": "",
	I0109 00:24:37.043134 1747564 command_runner.go:130] >       "spec": null,
	I0109 00:24:37.043139 1747564 command_runner.go:130] >       "pinned": false
	I0109 00:24:37.043143 1747564 command_runner.go:130] >     },
	I0109 00:24:37.043149 1747564 command_runner.go:130] >     {
	I0109 00:24:37.043157 1747564 command_runner.go:130] >       "id": "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I0109 00:24:37.043165 1747564 command_runner.go:130] >       "repoTags": [
	I0109 00:24:37.043171 1747564 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0109 00:24:37.043176 1747564 command_runner.go:130] >       ],
	I0109 00:24:37.043181 1747564 command_runner.go:130] >       "repoDigests": [
	I0109 00:24:37.043190 1747564 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I0109 00:24:37.043200 1747564 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I0109 00:24:37.043204 1747564 command_runner.go:130] >       ],
	I0109 00:24:37.043216 1747564 command_runner.go:130] >       "size": "29037500",
	I0109 00:24:37.043221 1747564 command_runner.go:130] >       "uid": null,
	I0109 00:24:37.043226 1747564 command_runner.go:130] >       "username": "",
	I0109 00:24:37.043230 1747564 command_runner.go:130] >       "spec": null,
	I0109 00:24:37.043235 1747564 command_runner.go:130] >       "pinned": false
	I0109 00:24:37.043239 1747564 command_runner.go:130] >     },
	I0109 00:24:37.043243 1747564 command_runner.go:130] >     {
	I0109 00:24:37.043253 1747564 command_runner.go:130] >       "id": "97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108",
	I0109 00:24:37.043261 1747564 command_runner.go:130] >       "repoTags": [
	I0109 00:24:37.043267 1747564 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I0109 00:24:37.043274 1747564 command_runner.go:130] >       ],
	I0109 00:24:37.043278 1747564 command_runner.go:130] >       "repoDigests": [
	I0109 00:24:37.043288 1747564 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:74130b944396a0b0ca9af923ee6e03b08a35d98fc1bbaef4e35cf9acc5599105",
	I0109 00:24:37.043299 1747564 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e"
	I0109 00:24:37.043304 1747564 command_runner.go:130] >       ],
	I0109 00:24:37.043309 1747564 command_runner.go:130] >       "size": "51393451",
	I0109 00:24:37.043314 1747564 command_runner.go:130] >       "uid": null,
	I0109 00:24:37.043322 1747564 command_runner.go:130] >       "username": "",
	I0109 00:24:37.043328 1747564 command_runner.go:130] >       "spec": null,
	I0109 00:24:37.043333 1747564 command_runner.go:130] >       "pinned": false
	I0109 00:24:37.043338 1747564 command_runner.go:130] >     },
	I0109 00:24:37.043345 1747564 command_runner.go:130] >     {
	I0109 00:24:37.043352 1747564 command_runner.go:130] >       "id": "9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace",
	I0109 00:24:37.043360 1747564 command_runner.go:130] >       "repoTags": [
	I0109 00:24:37.043366 1747564 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.9-0"
	I0109 00:24:37.043370 1747564 command_runner.go:130] >       ],
	I0109 00:24:37.043375 1747564 command_runner.go:130] >       "repoDigests": [
	I0109 00:24:37.043387 1747564 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3",
	I0109 00:24:37.043396 1747564 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e60789d18cc66486e6db4094383f9732280092f07a1f5455ecbe11d404c8e48b"
	I0109 00:24:37.043408 1747564 command_runner.go:130] >       ],
	I0109 00:24:37.043414 1747564 command_runner.go:130] >       "size": "182203183",
	I0109 00:24:37.043419 1747564 command_runner.go:130] >       "uid": {
	I0109 00:24:37.043426 1747564 command_runner.go:130] >         "value": "0"
	I0109 00:24:37.043430 1747564 command_runner.go:130] >       },
	I0109 00:24:37.043435 1747564 command_runner.go:130] >       "username": "",
	I0109 00:24:37.043442 1747564 command_runner.go:130] >       "spec": null,
	I0109 00:24:37.043449 1747564 command_runner.go:130] >       "pinned": false
	I0109 00:24:37.043457 1747564 command_runner.go:130] >     },
	I0109 00:24:37.043461 1747564 command_runner.go:130] >     {
	I0109 00:24:37.043469 1747564 command_runner.go:130] >       "id": "04b4c447bb9d4840af3bf7e836397379d65df87c86e55dcd27f31a8d11df2419",
	I0109 00:24:37.043477 1747564 command_runner.go:130] >       "repoTags": [
	I0109 00:24:37.043484 1747564 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.28.4"
	I0109 00:24:37.043490 1747564 command_runner.go:130] >       ],
	I0109 00:24:37.043496 1747564 command_runner.go:130] >       "repoDigests": [
	I0109 00:24:37.043507 1747564 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb",
	I0109 00:24:37.043519 1747564 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a4c3e6bec39f5dcb221a2f08266513ab19b7d977ccc76a0bcaf04d4935ac0fb2"
	I0109 00:24:37.043523 1747564 command_runner.go:130] >       ],
	I0109 00:24:37.043528 1747564 command_runner.go:130] >       "size": "121119694",
	I0109 00:24:37.043536 1747564 command_runner.go:130] >       "uid": {
	I0109 00:24:37.043540 1747564 command_runner.go:130] >         "value": "0"
	I0109 00:24:37.043545 1747564 command_runner.go:130] >       },
	I0109 00:24:37.043552 1747564 command_runner.go:130] >       "username": "",
	I0109 00:24:37.043557 1747564 command_runner.go:130] >       "spec": null,
	I0109 00:24:37.043562 1747564 command_runner.go:130] >       "pinned": false
	I0109 00:24:37.043570 1747564 command_runner.go:130] >     },
	I0109 00:24:37.043575 1747564 command_runner.go:130] >     {
	I0109 00:24:37.043582 1747564 command_runner.go:130] >       "id": "9961cbceaf234d59b7dcf8a197a024f3e3ce4b7fe2b67c2378efd3d209ca994b",
	I0109 00:24:37.043590 1747564 command_runner.go:130] >       "repoTags": [
	I0109 00:24:37.043596 1747564 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.28.4"
	I0109 00:24:37.043601 1747564 command_runner.go:130] >       ],
	I0109 00:24:37.043609 1747564 command_runner.go:130] >       "repoDigests": [
	I0109 00:24:37.043619 1747564 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c",
	I0109 00:24:37.043632 1747564 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:fe49ea386d014cbf10cd16f53900b91bb7e7c32c5cf4fdcea550c33bedfdc92e"
	I0109 00:24:37.043637 1747564 command_runner.go:130] >       ],
	I0109 00:24:37.043648 1747564 command_runner.go:130] >       "size": "117252916",
	I0109 00:24:37.043653 1747564 command_runner.go:130] >       "uid": {
	I0109 00:24:37.043658 1747564 command_runner.go:130] >         "value": "0"
	I0109 00:24:37.043663 1747564 command_runner.go:130] >       },
	I0109 00:24:37.043670 1747564 command_runner.go:130] >       "username": "",
	I0109 00:24:37.043675 1747564 command_runner.go:130] >       "spec": null,
	I0109 00:24:37.043680 1747564 command_runner.go:130] >       "pinned": false
	I0109 00:24:37.043686 1747564 command_runner.go:130] >     },
	I0109 00:24:37.043694 1747564 command_runner.go:130] >     {
	I0109 00:24:37.043706 1747564 command_runner.go:130] >       "id": "3ca3ca488cf13fde14cfc4b3ffde0c53a8c161b030f4a444a797fba6aef38c39",
	I0109 00:24:37.043710 1747564 command_runner.go:130] >       "repoTags": [
	I0109 00:24:37.043723 1747564 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.28.4"
	I0109 00:24:37.043728 1747564 command_runner.go:130] >       ],
	I0109 00:24:37.043733 1747564 command_runner.go:130] >       "repoDigests": [
	I0109 00:24:37.043744 1747564 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:460fb2711108dbfbe9ad77860d0fd8aad842c0e97f1aee757b33f2218daece68",
	I0109 00:24:37.043753 1747564 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"
	I0109 00:24:37.043758 1747564 command_runner.go:130] >       ],
	I0109 00:24:37.043765 1747564 command_runner.go:130] >       "size": "69992343",
	I0109 00:24:37.043770 1747564 command_runner.go:130] >       "uid": null,
	I0109 00:24:37.043778 1747564 command_runner.go:130] >       "username": "",
	I0109 00:24:37.043782 1747564 command_runner.go:130] >       "spec": null,
	I0109 00:24:37.043787 1747564 command_runner.go:130] >       "pinned": false
	I0109 00:24:37.043793 1747564 command_runner.go:130] >     },
	I0109 00:24:37.043798 1747564 command_runner.go:130] >     {
	I0109 00:24:37.043809 1747564 command_runner.go:130] >       "id": "05c284c929889d88306fdb3dd14ee2d0132543740f9e247685243214fc3d2c54",
	I0109 00:24:37.043815 1747564 command_runner.go:130] >       "repoTags": [
	I0109 00:24:37.043825 1747564 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.28.4"
	I0109 00:24:37.043829 1747564 command_runner.go:130] >       ],
	I0109 00:24:37.043834 1747564 command_runner.go:130] >       "repoDigests": [
	I0109 00:24:37.043852 1747564 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba",
	I0109 00:24:37.043864 1747564 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:ddb0fb05335238789e9a847f0c6731e1da918c42a389625cb2a7ec577ca20afe"
	I0109 00:24:37.043869 1747564 command_runner.go:130] >       ],
	I0109 00:24:37.043876 1747564 command_runner.go:130] >       "size": "59253556",
	I0109 00:24:37.043881 1747564 command_runner.go:130] >       "uid": {
	I0109 00:24:37.043888 1747564 command_runner.go:130] >         "value": "0"
	I0109 00:24:37.043892 1747564 command_runner.go:130] >       },
	I0109 00:24:37.043897 1747564 command_runner.go:130] >       "username": "",
	I0109 00:24:37.043905 1747564 command_runner.go:130] >       "spec": null,
	I0109 00:24:37.043909 1747564 command_runner.go:130] >       "pinned": false
	I0109 00:24:37.043913 1747564 command_runner.go:130] >     },
	I0109 00:24:37.043918 1747564 command_runner.go:130] >     {
	I0109 00:24:37.043925 1747564 command_runner.go:130] >       "id": "829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e",
	I0109 00:24:37.043932 1747564 command_runner.go:130] >       "repoTags": [
	I0109 00:24:37.043938 1747564 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0109 00:24:37.043944 1747564 command_runner.go:130] >       ],
	I0109 00:24:37.043951 1747564 command_runner.go:130] >       "repoDigests": [
	I0109 00:24:37.043960 1747564 command_runner.go:130] >         "registry.k8s.io/pause@sha256:3ec98b8452dc8ae265a6917dfb81587ac78849e520d5dbba6de524851d20eca6",
	I0109 00:24:37.043973 1747564 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"
	I0109 00:24:37.043977 1747564 command_runner.go:130] >       ],
	I0109 00:24:37.043984 1747564 command_runner.go:130] >       "size": "520014",
	I0109 00:24:37.043989 1747564 command_runner.go:130] >       "uid": {
	I0109 00:24:37.043996 1747564 command_runner.go:130] >         "value": "65535"
	I0109 00:24:37.044001 1747564 command_runner.go:130] >       },
	I0109 00:24:37.044005 1747564 command_runner.go:130] >       "username": "",
	I0109 00:24:37.044012 1747564 command_runner.go:130] >       "spec": null,
	I0109 00:24:37.044020 1747564 command_runner.go:130] >       "pinned": false
	I0109 00:24:37.044028 1747564 command_runner.go:130] >     }
	I0109 00:24:37.044032 1747564 command_runner.go:130] >   ]
	I0109 00:24:37.044036 1747564 command_runner.go:130] > }
	I0109 00:24:37.044174 1747564 crio.go:496] all images are preloaded for cri-o runtime.
	I0109 00:24:37.044186 1747564 cache_images.go:84] Images are preloaded, skipping loading
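The preload check above (crio.go:496, cache_images.go:84) amounts to listing the runtime's image store and comparing it against the images the cluster needs. A minimal stdlib-only Go sketch of that comparison, assuming crictl is on PATH and pointed at the CRI-O socket; the required list below is illustrative, not minikube's canonical set:

// preload_check.go: sketch of verifying that required images are already
// present in the CRI-O store, mirroring the "all images are preloaded" check.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type imageList struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	out, err := exec.Command("sudo", "crictl", "images", "-o", "json").Output()
	if err != nil {
		panic(err)
	}
	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		panic(err)
	}
	have := map[string]bool{}
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			have[tag] = true
		}
	}
	// Illustrative subset of the tags visible in the dump above.
	required := []string{
		"registry.k8s.io/kube-apiserver:v1.28.4",
		"registry.k8s.io/kube-proxy:v1.28.4",
		"registry.k8s.io/pause:3.9",
	}
	for _, want := range required {
		if !have[want] {
			fmt.Println("missing, would trigger image load:", want)
			return
		}
	}
	fmt.Println("all images are preloaded, skipping loading")
}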
	I0109 00:24:37.044265 1747564 ssh_runner.go:195] Run: crio config
	I0109 00:24:37.094873 1747564 command_runner.go:130] ! time="2024-01-09 00:24:37.094524975Z" level=info msg="Starting CRI-O, version: 1.24.6, git: 4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90(clean)"
	I0109 00:24:37.095311 1747564 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0109 00:24:37.105613 1747564 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0109 00:24:37.105640 1747564 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0109 00:24:37.105648 1747564 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0109 00:24:37.105655 1747564 command_runner.go:130] > #
	I0109 00:24:37.105663 1747564 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0109 00:24:37.105671 1747564 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0109 00:24:37.105679 1747564 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0109 00:24:37.105692 1747564 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0109 00:24:37.105696 1747564 command_runner.go:130] > # reload'.
	I0109 00:24:37.105708 1747564 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0109 00:24:37.105715 1747564 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0109 00:24:37.105726 1747564 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0109 00:24:37.105736 1747564 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0109 00:24:37.105746 1747564 command_runner.go:130] > [crio]
	I0109 00:24:37.105756 1747564 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0109 00:24:37.105765 1747564 command_runner.go:130] > # containers images, in this directory.
	I0109 00:24:37.105774 1747564 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I0109 00:24:37.105782 1747564 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0109 00:24:37.105791 1747564 command_runner.go:130] > # runroot = "/tmp/containers-user-1000/containers"
	I0109 00:24:37.105798 1747564 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0109 00:24:37.105805 1747564 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0109 00:24:37.105813 1747564 command_runner.go:130] > # storage_driver = "vfs"
	I0109 00:24:37.105820 1747564 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0109 00:24:37.105829 1747564 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0109 00:24:37.105836 1747564 command_runner.go:130] > # storage_option = [
	I0109 00:24:37.105840 1747564 command_runner.go:130] > # ]
	I0109 00:24:37.105848 1747564 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0109 00:24:37.105858 1747564 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0109 00:24:37.105864 1747564 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0109 00:24:37.105873 1747564 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0109 00:24:37.105880 1747564 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0109 00:24:37.105892 1747564 command_runner.go:130] > # always happen on a node reboot
	I0109 00:24:37.105899 1747564 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0109 00:24:37.105908 1747564 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0109 00:24:37.105915 1747564 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0109 00:24:37.105935 1747564 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0109 00:24:37.105941 1747564 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I0109 00:24:37.105954 1747564 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0109 00:24:37.105963 1747564 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0109 00:24:37.105970 1747564 command_runner.go:130] > # internal_wipe = true
	I0109 00:24:37.105979 1747564 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0109 00:24:37.105989 1747564 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0109 00:24:37.105996 1747564 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0109 00:24:37.106017 1747564 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0109 00:24:37.106024 1747564 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0109 00:24:37.106031 1747564 command_runner.go:130] > [crio.api]
	I0109 00:24:37.106038 1747564 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0109 00:24:37.106043 1747564 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0109 00:24:37.106052 1747564 command_runner.go:130] > # IP address on which the stream server will listen.
	I0109 00:24:37.106057 1747564 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0109 00:24:37.106074 1747564 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0109 00:24:37.106084 1747564 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0109 00:24:37.106089 1747564 command_runner.go:130] > # stream_port = "0"
	I0109 00:24:37.106096 1747564 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0109 00:24:37.106101 1747564 command_runner.go:130] > # stream_enable_tls = false
	I0109 00:24:37.106112 1747564 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0109 00:24:37.106117 1747564 command_runner.go:130] > # stream_idle_timeout = ""
	I0109 00:24:37.106125 1747564 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0109 00:24:37.106136 1747564 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0109 00:24:37.106141 1747564 command_runner.go:130] > # minutes.
	I0109 00:24:37.106148 1747564 command_runner.go:130] > # stream_tls_cert = ""
	I0109 00:24:37.106156 1747564 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0109 00:24:37.106171 1747564 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0109 00:24:37.106176 1747564 command_runner.go:130] > # stream_tls_key = ""
	I0109 00:24:37.106184 1747564 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0109 00:24:37.106194 1747564 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0109 00:24:37.106200 1747564 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0109 00:24:37.106205 1747564 command_runner.go:130] > # stream_tls_ca = ""
	I0109 00:24:37.106218 1747564 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I0109 00:24:37.106227 1747564 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I0109 00:24:37.106236 1747564 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I0109 00:24:37.106245 1747564 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I0109 00:24:37.106262 1747564 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0109 00:24:37.106274 1747564 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0109 00:24:37.106279 1747564 command_runner.go:130] > [crio.runtime]
	I0109 00:24:37.106286 1747564 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0109 00:24:37.106296 1747564 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0109 00:24:37.106300 1747564 command_runner.go:130] > # "nofile=1024:2048"
	I0109 00:24:37.106312 1747564 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0109 00:24:37.106318 1747564 command_runner.go:130] > # default_ulimits = [
	I0109 00:24:37.106324 1747564 command_runner.go:130] > # ]
	I0109 00:24:37.106332 1747564 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0109 00:24:37.106336 1747564 command_runner.go:130] > # no_pivot = false
	I0109 00:24:37.106343 1747564 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0109 00:24:37.106353 1747564 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0109 00:24:37.106359 1747564 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0109 00:24:37.106369 1747564 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0109 00:24:37.106382 1747564 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0109 00:24:37.106390 1747564 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0109 00:24:37.106398 1747564 command_runner.go:130] > # conmon = ""
	I0109 00:24:37.106403 1747564 command_runner.go:130] > # Cgroup setting for conmon
	I0109 00:24:37.106412 1747564 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0109 00:24:37.106422 1747564 command_runner.go:130] > conmon_cgroup = "pod"
	I0109 00:24:37.106429 1747564 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0109 00:24:37.106448 1747564 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0109 00:24:37.106461 1747564 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0109 00:24:37.106470 1747564 command_runner.go:130] > # conmon_env = [
	I0109 00:24:37.106474 1747564 command_runner.go:130] > # ]
	I0109 00:24:37.106485 1747564 command_runner.go:130] > # Additional environment variables to set for all the
	I0109 00:24:37.106492 1747564 command_runner.go:130] > # containers. These are overridden if set in the
	I0109 00:24:37.106501 1747564 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0109 00:24:37.106506 1747564 command_runner.go:130] > # default_env = [
	I0109 00:24:37.106510 1747564 command_runner.go:130] > # ]
	I0109 00:24:37.106520 1747564 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0109 00:24:37.106527 1747564 command_runner.go:130] > # selinux = false
	I0109 00:24:37.106537 1747564 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0109 00:24:37.106544 1747564 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0109 00:24:37.106553 1747564 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0109 00:24:37.106566 1747564 command_runner.go:130] > # seccomp_profile = ""
	I0109 00:24:37.106573 1747564 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0109 00:24:37.106580 1747564 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0109 00:24:37.106590 1747564 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0109 00:24:37.106604 1747564 command_runner.go:130] > # which might increase security.
	I0109 00:24:37.106613 1747564 command_runner.go:130] > # seccomp_use_default_when_empty = true
	I0109 00:24:37.106620 1747564 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0109 00:24:37.106628 1747564 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0109 00:24:37.106639 1747564 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0109 00:24:37.106648 1747564 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0109 00:24:37.106658 1747564 command_runner.go:130] > # This option supports live configuration reload.
	I0109 00:24:37.106663 1747564 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0109 00:24:37.106672 1747564 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0109 00:24:37.106677 1747564 command_runner.go:130] > # the cgroup blockio controller.
	I0109 00:24:37.106688 1747564 command_runner.go:130] > # blockio_config_file = ""
	I0109 00:24:37.106699 1747564 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0109 00:24:37.106704 1747564 command_runner.go:130] > # irqbalance daemon.
	I0109 00:24:37.106711 1747564 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0109 00:24:37.106720 1747564 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0109 00:24:37.106729 1747564 command_runner.go:130] > # This option supports live configuration reload.
	I0109 00:24:37.106736 1747564 command_runner.go:130] > # rdt_config_file = ""
	I0109 00:24:37.106742 1747564 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0109 00:24:37.106748 1747564 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0109 00:24:37.106757 1747564 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0109 00:24:37.106762 1747564 command_runner.go:130] > # separate_pull_cgroup = ""
	I0109 00:24:37.106772 1747564 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0109 00:24:37.106779 1747564 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0109 00:24:37.106784 1747564 command_runner.go:130] > # will be added.
	I0109 00:24:37.106789 1747564 command_runner.go:130] > # default_capabilities = [
	I0109 00:24:37.106796 1747564 command_runner.go:130] > # 	"CHOWN",
	I0109 00:24:37.106801 1747564 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0109 00:24:37.106807 1747564 command_runner.go:130] > # 	"FSETID",
	I0109 00:24:37.106815 1747564 command_runner.go:130] > # 	"FOWNER",
	I0109 00:24:37.106819 1747564 command_runner.go:130] > # 	"SETGID",
	I0109 00:24:37.106824 1747564 command_runner.go:130] > # 	"SETUID",
	I0109 00:24:37.106830 1747564 command_runner.go:130] > # 	"SETPCAP",
	I0109 00:24:37.106835 1747564 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0109 00:24:37.106842 1747564 command_runner.go:130] > # 	"KILL",
	I0109 00:24:37.106851 1747564 command_runner.go:130] > # ]
	I0109 00:24:37.106861 1747564 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0109 00:24:37.106871 1747564 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0109 00:24:37.106876 1747564 command_runner.go:130] > # add_inheritable_capabilities = true
	I0109 00:24:37.106886 1747564 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0109 00:24:37.106894 1747564 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0109 00:24:37.106899 1747564 command_runner.go:130] > # default_sysctls = [
	I0109 00:24:37.106905 1747564 command_runner.go:130] > # ]
	I0109 00:24:37.106911 1747564 command_runner.go:130] > # List of devices on the host that a
	I0109 00:24:37.106918 1747564 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0109 00:24:37.106926 1747564 command_runner.go:130] > # allowed_devices = [
	I0109 00:24:37.106934 1747564 command_runner.go:130] > # 	"/dev/fuse",
	I0109 00:24:37.106941 1747564 command_runner.go:130] > # ]
	I0109 00:24:37.106954 1747564 command_runner.go:130] > # List of additional devices, specified as
	I0109 00:24:37.106979 1747564 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0109 00:24:37.106991 1747564 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0109 00:24:37.106998 1747564 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0109 00:24:37.107006 1747564 command_runner.go:130] > # additional_devices = [
	I0109 00:24:37.107010 1747564 command_runner.go:130] > # ]
	I0109 00:24:37.107017 1747564 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0109 00:24:37.107024 1747564 command_runner.go:130] > # cdi_spec_dirs = [
	I0109 00:24:37.107031 1747564 command_runner.go:130] > # 	"/etc/cdi",
	I0109 00:24:37.107036 1747564 command_runner.go:130] > # 	"/var/run/cdi",
	I0109 00:24:37.107040 1747564 command_runner.go:130] > # ]
	I0109 00:24:37.107048 1747564 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0109 00:24:37.107058 1747564 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0109 00:24:37.107062 1747564 command_runner.go:130] > # Defaults to false.
	I0109 00:24:37.107070 1747564 command_runner.go:130] > # device_ownership_from_security_context = false
	I0109 00:24:37.107080 1747564 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0109 00:24:37.107090 1747564 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0109 00:24:37.107098 1747564 command_runner.go:130] > # hooks_dir = [
	I0109 00:24:37.107106 1747564 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0109 00:24:37.107111 1747564 command_runner.go:130] > # ]
	I0109 00:24:37.107121 1747564 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0109 00:24:37.107129 1747564 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0109 00:24:37.107137 1747564 command_runner.go:130] > # its default mounts from the following two files:
	I0109 00:24:37.107141 1747564 command_runner.go:130] > #
	I0109 00:24:37.107149 1747564 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0109 00:24:37.107159 1747564 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0109 00:24:37.107165 1747564 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0109 00:24:37.107172 1747564 command_runner.go:130] > #
	I0109 00:24:37.107180 1747564 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0109 00:24:37.107187 1747564 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0109 00:24:37.107195 1747564 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0109 00:24:37.107203 1747564 command_runner.go:130] > #      only add mounts it finds in this file.
	I0109 00:24:37.107207 1747564 command_runner.go:130] > #
	I0109 00:24:37.107212 1747564 command_runner.go:130] > # default_mounts_file = ""
	I0109 00:24:37.107221 1747564 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0109 00:24:37.107233 1747564 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0109 00:24:37.107241 1747564 command_runner.go:130] > # pids_limit = 0
	I0109 00:24:37.107248 1747564 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0109 00:24:37.107258 1747564 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0109 00:24:37.107265 1747564 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0109 00:24:37.107277 1747564 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0109 00:24:37.107285 1747564 command_runner.go:130] > # log_size_max = -1
	I0109 00:24:37.107294 1747564 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0109 00:24:37.107300 1747564 command_runner.go:130] > # log_to_journald = false
	I0109 00:24:37.107308 1747564 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0109 00:24:37.107317 1747564 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0109 00:24:37.107323 1747564 command_runner.go:130] > # Path to directory for container attach sockets.
	I0109 00:24:37.107332 1747564 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0109 00:24:37.107338 1747564 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0109 00:24:37.107343 1747564 command_runner.go:130] > # bind_mount_prefix = ""
	I0109 00:24:37.107349 1747564 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0109 00:24:37.107354 1747564 command_runner.go:130] > # read_only = false
	I0109 00:24:37.107364 1747564 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0109 00:24:37.107373 1747564 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0109 00:24:37.107380 1747564 command_runner.go:130] > # live configuration reload.
	I0109 00:24:37.107385 1747564 command_runner.go:130] > # log_level = "info"
	I0109 00:24:37.107392 1747564 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0109 00:24:37.107401 1747564 command_runner.go:130] > # This option supports live configuration reload.
	I0109 00:24:37.107405 1747564 command_runner.go:130] > # log_filter = ""
	I0109 00:24:37.107412 1747564 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0109 00:24:37.107420 1747564 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0109 00:24:37.107427 1747564 command_runner.go:130] > # separated by comma.
	I0109 00:24:37.107431 1747564 command_runner.go:130] > # uid_mappings = ""
	I0109 00:24:37.107439 1747564 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0109 00:24:37.107449 1747564 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0109 00:24:37.107453 1747564 command_runner.go:130] > # separated by comma.
	I0109 00:24:37.107458 1747564 command_runner.go:130] > # gid_mappings = ""
	I0109 00:24:37.107467 1747564 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0109 00:24:37.107478 1747564 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0109 00:24:37.107485 1747564 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0109 00:24:37.107492 1747564 command_runner.go:130] > # minimum_mappable_uid = -1
	I0109 00:24:37.107502 1747564 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0109 00:24:37.107511 1747564 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0109 00:24:37.107521 1747564 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0109 00:24:37.107528 1747564 command_runner.go:130] > # minimum_mappable_gid = -1
	I0109 00:24:37.107535 1747564 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0109 00:24:37.107543 1747564 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0109 00:24:37.107554 1747564 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0109 00:24:37.107563 1747564 command_runner.go:130] > # ctr_stop_timeout = 30
	I0109 00:24:37.107573 1747564 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0109 00:24:37.107582 1747564 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0109 00:24:37.107588 1747564 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0109 00:24:37.107596 1747564 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0109 00:24:37.107601 1747564 command_runner.go:130] > # drop_infra_ctr = true
	I0109 00:24:37.107608 1747564 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0109 00:24:37.107618 1747564 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0109 00:24:37.107626 1747564 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0109 00:24:37.107632 1747564 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0109 00:24:37.107643 1747564 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0109 00:24:37.107651 1747564 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0109 00:24:37.107659 1747564 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0109 00:24:37.107667 1747564 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0109 00:24:37.107674 1747564 command_runner.go:130] > # pinns_path = ""
	I0109 00:24:37.107681 1747564 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0109 00:24:37.107692 1747564 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I0109 00:24:37.107700 1747564 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I0109 00:24:37.107705 1747564 command_runner.go:130] > # default_runtime = "runc"
	I0109 00:24:37.107713 1747564 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0109 00:24:37.107722 1747564 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0109 00:24:37.107738 1747564 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0109 00:24:37.107745 1747564 command_runner.go:130] > # creation as a file is not desired either.
	I0109 00:24:37.107757 1747564 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0109 00:24:37.107763 1747564 command_runner.go:130] > # the hostname is being managed dynamically.
	I0109 00:24:37.107769 1747564 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0109 00:24:37.107777 1747564 command_runner.go:130] > # ]
	I0109 00:24:37.107784 1747564 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0109 00:24:37.107792 1747564 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0109 00:24:37.107804 1747564 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I0109 00:24:37.107812 1747564 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I0109 00:24:37.107821 1747564 command_runner.go:130] > #
	I0109 00:24:37.107827 1747564 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I0109 00:24:37.107835 1747564 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I0109 00:24:37.107840 1747564 command_runner.go:130] > #  runtime_type = "oci"
	I0109 00:24:37.107848 1747564 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I0109 00:24:37.107855 1747564 command_runner.go:130] > #  privileged_without_host_devices = false
	I0109 00:24:37.107860 1747564 command_runner.go:130] > #  allowed_annotations = []
	I0109 00:24:37.107867 1747564 command_runner.go:130] > # Where:
	I0109 00:24:37.107873 1747564 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I0109 00:24:37.107881 1747564 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I0109 00:24:37.107891 1747564 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0109 00:24:37.107902 1747564 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0109 00:24:37.107907 1747564 command_runner.go:130] > #   in $PATH.
	I0109 00:24:37.107916 1747564 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I0109 00:24:37.107929 1747564 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0109 00:24:37.107937 1747564 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I0109 00:24:37.107945 1747564 command_runner.go:130] > #   state.
	I0109 00:24:37.107953 1747564 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0109 00:24:37.107962 1747564 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0109 00:24:37.107970 1747564 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0109 00:24:37.107979 1747564 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0109 00:24:37.107986 1747564 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0109 00:24:37.107994 1747564 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0109 00:24:37.108002 1747564 command_runner.go:130] > #   The currently recognized values are:
	I0109 00:24:37.108010 1747564 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0109 00:24:37.108019 1747564 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0109 00:24:37.108028 1747564 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0109 00:24:37.108036 1747564 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0109 00:24:37.108047 1747564 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0109 00:24:37.108055 1747564 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0109 00:24:37.108062 1747564 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0109 00:24:37.108072 1747564 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I0109 00:24:37.108078 1747564 command_runner.go:130] > #   should be moved to the container's cgroup
	I0109 00:24:37.108083 1747564 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0109 00:24:37.108094 1747564 command_runner.go:130] > runtime_path = "/usr/lib/cri-o-runc/sbin/runc"
	I0109 00:24:37.108101 1747564 command_runner.go:130] > runtime_type = "oci"
	I0109 00:24:37.108106 1747564 command_runner.go:130] > runtime_root = "/run/runc"
	I0109 00:24:37.108111 1747564 command_runner.go:130] > runtime_config_path = ""
	I0109 00:24:37.108116 1747564 command_runner.go:130] > monitor_path = ""
	I0109 00:24:37.108123 1747564 command_runner.go:130] > monitor_cgroup = ""
	I0109 00:24:37.108128 1747564 command_runner.go:130] > monitor_exec_cgroup = ""
	I0109 00:24:37.108165 1747564 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I0109 00:24:37.108174 1747564 command_runner.go:130] > # running containers
	I0109 00:24:37.108183 1747564 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I0109 00:24:37.108194 1747564 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I0109 00:24:37.108202 1747564 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I0109 00:24:37.108212 1747564 command_runner.go:130] > # surface and mitigating the consequences of a container breakout.
	I0109 00:24:37.108218 1747564 command_runner.go:130] > # Kata Containers with the default configured VMM
	I0109 00:24:37.108224 1747564 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I0109 00:24:37.108229 1747564 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I0109 00:24:37.108235 1747564 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I0109 00:24:37.108244 1747564 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I0109 00:24:37.108252 1747564 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I0109 00:24:37.108260 1747564 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0109 00:24:37.108269 1747564 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0109 00:24:37.108277 1747564 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0109 00:24:37.108288 1747564 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0109 00:24:37.108300 1747564 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0109 00:24:37.108307 1747564 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0109 00:24:37.108318 1747564 command_runner.go:130] > # For a container to opt into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0109 00:24:37.108330 1747564 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0109 00:24:37.108337 1747564 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0109 00:24:37.108348 1747564 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0109 00:24:37.108354 1747564 command_runner.go:130] > # Example:
	I0109 00:24:37.108360 1747564 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0109 00:24:37.108369 1747564 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0109 00:24:37.108375 1747564 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0109 00:24:37.108381 1747564 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0109 00:24:37.108391 1747564 command_runner.go:130] > # cpuset = "0-1"
	I0109 00:24:37.108395 1747564 command_runner.go:130] > # cpushares = 0
	I0109 00:24:37.108402 1747564 command_runner.go:130] > # Where:
	I0109 00:24:37.108408 1747564 command_runner.go:130] > # The workload name is workload-type.
	I0109 00:24:37.108419 1747564 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0109 00:24:37.108425 1747564 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0109 00:24:37.108434 1747564 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0109 00:24:37.108447 1747564 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0109 00:24:37.108458 1747564 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0109 00:24:37.108462 1747564 command_runner.go:130] > # 
	I0109 00:24:37.108470 1747564 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0109 00:24:37.108476 1747564 command_runner.go:130] > #
	I0109 00:24:37.108483 1747564 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0109 00:24:37.108494 1747564 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0109 00:24:37.108501 1747564 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0109 00:24:37.108509 1747564 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0109 00:24:37.108518 1747564 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0109 00:24:37.108523 1747564 command_runner.go:130] > [crio.image]
	I0109 00:24:37.108534 1747564 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0109 00:24:37.108540 1747564 command_runner.go:130] > # default_transport = "docker://"
	I0109 00:24:37.108548 1747564 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0109 00:24:37.108556 1747564 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0109 00:24:37.108564 1747564 command_runner.go:130] > # global_auth_file = ""
	I0109 00:24:37.108570 1747564 command_runner.go:130] > # The image used to instantiate infra containers.
	I0109 00:24:37.108576 1747564 command_runner.go:130] > # This option supports live configuration reload.
	I0109 00:24:37.108585 1747564 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I0109 00:24:37.108593 1747564 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0109 00:24:37.108611 1747564 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0109 00:24:37.108617 1747564 command_runner.go:130] > # This option supports live configuration reload.
	I0109 00:24:37.108627 1747564 command_runner.go:130] > # pause_image_auth_file = ""
	I0109 00:24:37.108634 1747564 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0109 00:24:37.108644 1747564 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0109 00:24:37.108651 1747564 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0109 00:24:37.108660 1747564 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0109 00:24:37.108668 1747564 command_runner.go:130] > # pause_command = "/pause"
	I0109 00:24:37.108675 1747564 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0109 00:24:37.108688 1747564 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0109 00:24:37.108695 1747564 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0109 00:24:37.108711 1747564 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0109 00:24:37.108720 1747564 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0109 00:24:37.108724 1747564 command_runner.go:130] > # signature_policy = ""
	I0109 00:24:37.108734 1747564 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0109 00:24:37.108746 1747564 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0109 00:24:37.108751 1747564 command_runner.go:130] > # changing them here.
	I0109 00:24:37.108757 1747564 command_runner.go:130] > # insecure_registries = [
	I0109 00:24:37.108765 1747564 command_runner.go:130] > # ]
	I0109 00:24:37.108772 1747564 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0109 00:24:37.108779 1747564 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0109 00:24:37.108786 1747564 command_runner.go:130] > # image_volumes = "mkdir"
	I0109 00:24:37.108792 1747564 command_runner.go:130] > # Temporary directory to use for storing big files
	I0109 00:24:37.108800 1747564 command_runner.go:130] > # big_files_temporary_dir = ""
	I0109 00:24:37.108807 1747564 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0109 00:24:37.108811 1747564 command_runner.go:130] > # CNI plugins.
	I0109 00:24:37.108818 1747564 command_runner.go:130] > [crio.network]
	I0109 00:24:37.108825 1747564 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0109 00:24:37.108835 1747564 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I0109 00:24:37.108842 1747564 command_runner.go:130] > # cni_default_network = ""
	I0109 00:24:37.108853 1747564 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0109 00:24:37.108858 1747564 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0109 00:24:37.108865 1747564 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0109 00:24:37.108871 1747564 command_runner.go:130] > # plugin_dirs = [
	I0109 00:24:37.108878 1747564 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0109 00:24:37.108883 1747564 command_runner.go:130] > # ]
	I0109 00:24:37.108889 1747564 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0109 00:24:37.108896 1747564 command_runner.go:130] > [crio.metrics]
	I0109 00:24:37.108902 1747564 command_runner.go:130] > # Globally enable or disable metrics support.
	I0109 00:24:37.108913 1747564 command_runner.go:130] > # enable_metrics = false
	I0109 00:24:37.108918 1747564 command_runner.go:130] > # Specify enabled metrics collectors.
	I0109 00:24:37.108926 1747564 command_runner.go:130] > # Per default all metrics are enabled.
	I0109 00:24:37.108933 1747564 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0109 00:24:37.108943 1747564 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0109 00:24:37.108950 1747564 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0109 00:24:37.108955 1747564 command_runner.go:130] > # metrics_collectors = [
	I0109 00:24:37.108961 1747564 command_runner.go:130] > # 	"operations",
	I0109 00:24:37.108969 1747564 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0109 00:24:37.108976 1747564 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0109 00:24:37.108983 1747564 command_runner.go:130] > # 	"operations_errors",
	I0109 00:24:37.108988 1747564 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0109 00:24:37.108993 1747564 command_runner.go:130] > # 	"image_pulls_by_name",
	I0109 00:24:37.109001 1747564 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0109 00:24:37.109006 1747564 command_runner.go:130] > # 	"image_pulls_failures",
	I0109 00:24:37.109011 1747564 command_runner.go:130] > # 	"image_pulls_successes",
	I0109 00:24:37.109017 1747564 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0109 00:24:37.109023 1747564 command_runner.go:130] > # 	"image_layer_reuse",
	I0109 00:24:37.109028 1747564 command_runner.go:130] > # 	"containers_oom_total",
	I0109 00:24:37.109035 1747564 command_runner.go:130] > # 	"containers_oom",
	I0109 00:24:37.109040 1747564 command_runner.go:130] > # 	"processes_defunct",
	I0109 00:24:37.109047 1747564 command_runner.go:130] > # 	"operations_total",
	I0109 00:24:37.109052 1747564 command_runner.go:130] > # 	"operations_latency_seconds",
	I0109 00:24:37.109059 1747564 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0109 00:24:37.109067 1747564 command_runner.go:130] > # 	"operations_errors_total",
	I0109 00:24:37.109072 1747564 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0109 00:24:37.109079 1747564 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0109 00:24:37.109085 1747564 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0109 00:24:37.109092 1747564 command_runner.go:130] > # 	"image_pulls_success_total",
	I0109 00:24:37.109097 1747564 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0109 00:24:37.109102 1747564 command_runner.go:130] > # 	"containers_oom_count_total",
	I0109 00:24:37.109106 1747564 command_runner.go:130] > # ]
	I0109 00:24:37.109113 1747564 command_runner.go:130] > # The port on which the metrics server will listen.
	I0109 00:24:37.109120 1747564 command_runner.go:130] > # metrics_port = 9090
	I0109 00:24:37.109126 1747564 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0109 00:24:37.109130 1747564 command_runner.go:130] > # metrics_socket = ""
	I0109 00:24:37.109137 1747564 command_runner.go:130] > # The certificate for the secure metrics server.
	I0109 00:24:37.109148 1747564 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0109 00:24:37.109155 1747564 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0109 00:24:37.109168 1747564 command_runner.go:130] > # certificate on any modification event.
	I0109 00:24:37.109172 1747564 command_runner.go:130] > # metrics_cert = ""
	I0109 00:24:37.109179 1747564 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0109 00:24:37.109187 1747564 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0109 00:24:37.109191 1747564 command_runner.go:130] > # metrics_key = ""
	I0109 00:24:37.109203 1747564 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0109 00:24:37.109210 1747564 command_runner.go:130] > [crio.tracing]
	I0109 00:24:37.109217 1747564 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0109 00:24:37.109221 1747564 command_runner.go:130] > # enable_tracing = false
	I0109 00:24:37.109230 1747564 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0109 00:24:37.109237 1747564 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0109 00:24:37.109245 1747564 command_runner.go:130] > # Number of samples to collect per million spans.
	I0109 00:24:37.109251 1747564 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0109 00:24:37.109258 1747564 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0109 00:24:37.109265 1747564 command_runner.go:130] > [crio.stats]
	I0109 00:24:37.109272 1747564 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0109 00:24:37.109280 1747564 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0109 00:24:37.109286 1747564 command_runner.go:130] > # stats_collection_period = 0
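Everything commented out in the dump above is a default; the lines that matter for this run are the uncommented ones (conmon_cgroup, cgroup_manager, the runc runtime table, pause_image). A small Go sketch, under the assumption that a line-based scan is enough for this flat key = value dump, of pulling out those effective settings:

// crio_config_scan.go: sketch of extracting effective (uncommented) settings
// from "crio config" output; deliberately line-based, not a full TOML parse.
package main

import (
	"bufio"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("sudo", "crio", "config").Output()
	if err != nil {
		panic(err)
	}
	active := map[string]string{}
	sc := bufio.NewScanner(strings.NewReader(string(out)))
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		// Skip comments, section headers, and blank lines.
		if line == "" || strings.HasPrefix(line, "#") || strings.HasPrefix(line, "[") {
			continue
		}
		if k, v, ok := strings.Cut(line, "="); ok {
			active[strings.TrimSpace(k)] = strings.Trim(strings.TrimSpace(v), `"`)
		}
	}
	// The two settings this run cares about:
	fmt.Println("cgroup_manager:", active["cgroup_manager"]) // expect "cgroupfs"
	fmt.Println("pause_image:", active["pause_image"])       // expect "registry.k8s.io/pause:3.9"
}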
	I0109 00:24:37.109798 1747564 cni.go:84] Creating CNI manager for ""
	I0109 00:24:37.109820 1747564 cni.go:136] 1 nodes found, recommending kindnet
	I0109 00:24:37.109846 1747564 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0109 00:24:37.109885 1747564 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-979047 NodeName:multinode-979047 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0109 00:24:37.110053 1747564 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-979047"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
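
The dump above (kubeadm.go:181) is the fully rendered config that minikube ships to /var/tmp/minikube/kubeadm.yaml.new a few lines below. Purely as an illustration of the rendering step (this is not minikube's actual template), a trimmed-down InitConfiguration document can be produced from the options struct with Go's text/template:

    package main

    import (
        "os"
        "text/template"
    )

    // Illustrative subset of the kubeadm options struct logged above.
    type initConfig struct {
        AdvertiseAddress string
        APIServerPort    int
        NodeName         string
        CRISocket        string
    }

    // Trimmed-down template; the full config renders four YAML documents.
    const initTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.AdvertiseAddress}}
      bindPort: {{.APIServerPort}}
    nodeRegistration:
      criSocket: {{.CRISocket}}
      name: "{{.NodeName}}"
    `

    func main() {
        t := template.Must(template.New("init").Parse(initTmpl))
        cfg := initConfig{
            AdvertiseAddress: "192.168.58.2",
            APIServerPort:    8443,
            NodeName:         "multinode-979047",
            CRISocket:        "unix:///var/run/crio/crio.sock",
        }
        if err := t.Execute(os.Stdout, cfg); err != nil {
            panic(err)
        }
    }
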
	
	I0109 00:24:37.110136 1747564 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=multinode-979047 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-979047 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0109 00:24:37.110224 1747564 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0109 00:24:37.120350 1747564 command_runner.go:130] > kubeadm
	I0109 00:24:37.120370 1747564 command_runner.go:130] > kubectl
	I0109 00:24:37.120375 1747564 command_runner.go:130] > kubelet
	I0109 00:24:37.121494 1747564 binaries.go:44] Found k8s binaries, skipping transfer
	I0109 00:24:37.121578 1747564 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0109 00:24:37.132468 1747564 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (426 bytes)
	I0109 00:24:37.153338 1747564 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0109 00:24:37.174459 1747564 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2097 bytes)
	I0109 00:24:37.195583 1747564 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0109 00:24:37.200060 1747564 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
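
The bash one-liner above pins control-plane.minikube.internal idempotently: the grep at 00:24:37.195583 first checks whether the entry already exists, and the pipeline then strips any stale line for that host, appends the fresh mapping, and copies the temp file back over /etc/hosts. A rough Go equivalent of the same rewrite, as a sketch only:

    package main

    import (
        "os"
        "strings"
    )

    // pinHost drops any stale "<ip>\t<name>" line from path and appends
    // the current mapping, mirroring the grep -v / echo pipeline above.
    func pinHost(path, ip, name string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if !strings.HasSuffix(line, "\t"+name) {
                kept = append(kept, line)
            }
        }
        kept = append(kept, ip+"\t"+name)
        return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
    }

    func main() {
        if err := pinHost("/etc/hosts", "192.168.58.2", "control-plane.minikube.internal"); err != nil {
            panic(err)
        }
    }
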
	I0109 00:24:37.212801 1747564 certs.go:56] Setting up /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/multinode-979047 for IP: 192.168.58.2
	I0109 00:24:37.212843 1747564 certs.go:190] acquiring lock for shared ca certs: {Name:mkd1a8a8c523b20f31a5839efb0f14edb2634692 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0109 00:24:37.212977 1747564 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17830-1678586/.minikube/ca.key
	I0109 00:24:37.213026 1747564 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17830-1678586/.minikube/proxy-client-ca.key
	I0109 00:24:37.213077 1747564 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/multinode-979047/client.key
	I0109 00:24:37.213093 1747564 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/multinode-979047/client.crt with IP's: []
	I0109 00:24:37.627620 1747564 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/multinode-979047/client.crt ...
	I0109 00:24:37.627656 1747564 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/multinode-979047/client.crt: {Name:mkcba13a4de4cb75b6caa21916c30087718e6c56 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0109 00:24:37.627865 1747564 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/multinode-979047/client.key ...
	I0109 00:24:37.627879 1747564 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/multinode-979047/client.key: {Name:mke0f09cdaa88ef9971fe234d4b93eb8981a85c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0109 00:24:37.627962 1747564 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/multinode-979047/apiserver.key.cee25041
	I0109 00:24:37.627980 1747564 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/multinode-979047/apiserver.crt.cee25041 with IP's: [192.168.58.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0109 00:24:38.365345 1747564 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/multinode-979047/apiserver.crt.cee25041 ...
	I0109 00:24:38.365381 1747564 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/multinode-979047/apiserver.crt.cee25041: {Name:mke8d953cf5461a58d0a4010a2d846514e4efae9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0109 00:24:38.365612 1747564 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/multinode-979047/apiserver.key.cee25041 ...
	I0109 00:24:38.365636 1747564 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/multinode-979047/apiserver.key.cee25041: {Name:mk92ba0c91b055385d27e26248f28081c051bcd6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0109 00:24:38.365734 1747564 certs.go:337] copying /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/multinode-979047/apiserver.crt.cee25041 -> /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/multinode-979047/apiserver.crt
	I0109 00:24:38.365836 1747564 certs.go:341] copying /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/multinode-979047/apiserver.key.cee25041 -> /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/multinode-979047/apiserver.key
	I0109 00:24:38.365898 1747564 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/multinode-979047/proxy-client.key
	I0109 00:24:38.365915 1747564 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/multinode-979047/proxy-client.crt with IP's: []
	I0109 00:24:38.946513 1747564 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/multinode-979047/proxy-client.crt ...
	I0109 00:24:38.946543 1747564 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/multinode-979047/proxy-client.crt: {Name:mk720582aa59eccd0dc7e2d2641cc918fe76c09e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0109 00:24:38.946729 1747564 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/multinode-979047/proxy-client.key ...
	I0109 00:24:38.946743 1747564 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/multinode-979047/proxy-client.key: {Name:mk1e48c8b44842ebc2af51b7fe98a63a85075e74 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
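
certs.go reuses the two persisted CAs and mints three leaf certificates here; note the SAN list on the apiserver cert: the node IP (192.168.58.2), the first address of the service CIDR (10.96.0.1), loopback, and 10.0.0.1. A self-contained sketch of issuing a CA-signed certificate with IP SANs using crypto/x509 (illustrative only; the throwaway CA below stands in for minikube's persisted ca.crt/ca.key):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // Throwaway CA for the sketch; minikube loads its existing CA instead.
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(24 * time.Hour),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        // Leaf cert carrying the same IP SANs the log shows for apiserver.crt.
        leafKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        leafTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{CommonName: "minikube"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(24 * time.Hour),
            IPAddresses: []net.IP{
                net.ParseIP("192.168.58.2"), net.ParseIP("10.96.0.1"),
                net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
            },
            ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        der, _ := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey)
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
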
	I0109 00:24:38.946826 1747564 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/multinode-979047/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0109 00:24:38.946853 1747564 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/multinode-979047/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0109 00:24:38.946867 1747564 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/multinode-979047/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0109 00:24:38.946882 1747564 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/multinode-979047/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0109 00:24:38.946894 1747564 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-1678586/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0109 00:24:38.946910 1747564 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-1678586/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0109 00:24:38.946921 1747564 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-1678586/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0109 00:24:38.946936 1747564 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-1678586/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0109 00:24:38.946998 1747564 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-1678586/.minikube/certs/home/jenkins/minikube-integration/17830-1678586/.minikube/certs/1683967.pem (1338 bytes)
	W0109 00:24:38.947038 1747564 certs.go:433] ignoring /home/jenkins/minikube-integration/17830-1678586/.minikube/certs/home/jenkins/minikube-integration/17830-1678586/.minikube/certs/1683967_empty.pem, impossibly tiny 0 bytes
	I0109 00:24:38.947051 1747564 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-1678586/.minikube/certs/home/jenkins/minikube-integration/17830-1678586/.minikube/certs/ca-key.pem (1679 bytes)
	I0109 00:24:38.947077 1747564 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-1678586/.minikube/certs/home/jenkins/minikube-integration/17830-1678586/.minikube/certs/ca.pem (1082 bytes)
	I0109 00:24:38.947110 1747564 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-1678586/.minikube/certs/home/jenkins/minikube-integration/17830-1678586/.minikube/certs/cert.pem (1123 bytes)
	I0109 00:24:38.947139 1747564 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-1678586/.minikube/certs/home/jenkins/minikube-integration/17830-1678586/.minikube/certs/key.pem (1679 bytes)
	I0109 00:24:38.947191 1747564 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-1678586/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17830-1678586/.minikube/files/etc/ssl/certs/16839672.pem (1708 bytes)
	I0109 00:24:38.947221 1747564 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-1678586/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0109 00:24:38.947236 1747564 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-1678586/.minikube/certs/1683967.pem -> /usr/share/ca-certificates/1683967.pem
	I0109 00:24:38.947255 1747564 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-1678586/.minikube/files/etc/ssl/certs/16839672.pem -> /usr/share/ca-certificates/16839672.pem
	I0109 00:24:38.947830 1747564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/multinode-979047/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0109 00:24:38.976376 1747564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/multinode-979047/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1671 bytes)
	I0109 00:24:39.005554 1747564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/multinode-979047/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0109 00:24:39.035688 1747564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/multinode-979047/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0109 00:24:39.065221 1747564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-1678586/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0109 00:24:39.095265 1747564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-1678586/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0109 00:24:39.124906 1747564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-1678586/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0109 00:24:39.153388 1747564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-1678586/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0109 00:24:39.181750 1747564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-1678586/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0109 00:24:39.209745 1747564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-1678586/.minikube/certs/1683967.pem --> /usr/share/ca-certificates/1683967.pem (1338 bytes)
	I0109 00:24:39.237407 1747564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-1678586/.minikube/files/etc/ssl/certs/16839672.pem --> /usr/share/ca-certificates/16839672.pem (1708 bytes)
	I0109 00:24:39.265562 1747564 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0109 00:24:39.287147 1747564 ssh_runner.go:195] Run: openssl version
	I0109 00:24:39.296935 1747564 command_runner.go:130] > OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)
	I0109 00:24:39.297219 1747564 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0109 00:24:39.309472 1747564 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0109 00:24:39.313953 1747564 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jan  9 00:02 /usr/share/ca-certificates/minikubeCA.pem
	I0109 00:24:39.313983 1747564 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan  9 00:02 /usr/share/ca-certificates/minikubeCA.pem
	I0109 00:24:39.314039 1747564 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0109 00:24:39.322116 1747564 command_runner.go:130] > b5213941
	I0109 00:24:39.322634 1747564 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0109 00:24:39.333900 1747564 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1683967.pem && ln -fs /usr/share/ca-certificates/1683967.pem /etc/ssl/certs/1683967.pem"
	I0109 00:24:39.345083 1747564 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1683967.pem
	I0109 00:24:39.349837 1747564 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jan  9 00:09 /usr/share/ca-certificates/1683967.pem
	I0109 00:24:39.349863 1747564 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan  9 00:09 /usr/share/ca-certificates/1683967.pem
	I0109 00:24:39.349914 1747564 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1683967.pem
	I0109 00:24:39.357939 1747564 command_runner.go:130] > 51391683
	I0109 00:24:39.358323 1747564 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1683967.pem /etc/ssl/certs/51391683.0"
	I0109 00:24:39.369529 1747564 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16839672.pem && ln -fs /usr/share/ca-certificates/16839672.pem /etc/ssl/certs/16839672.pem"
	I0109 00:24:39.380832 1747564 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16839672.pem
	I0109 00:24:39.385646 1747564 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jan  9 00:09 /usr/share/ca-certificates/16839672.pem
	I0109 00:24:39.385675 1747564 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan  9 00:09 /usr/share/ca-certificates/16839672.pem
	I0109 00:24:39.385766 1747564 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16839672.pem
	I0109 00:24:39.393727 1747564 command_runner.go:130] > 3ec20f2e
	I0109 00:24:39.394169 1747564 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/16839672.pem /etc/ssl/certs/3ec20f2e.0"
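
Each CA lands under /usr/share/ca-certificates and is then symlinked into /etc/ssl/certs as <subject-hash>.0, where the hash is what `openssl x509 -hash -noout` prints (b5213941, 51391683 and 3ec20f2e above); that hash-named link is how OpenSSL looks up trust anchors. A sketch of the same step in Go, shelling out to openssl for the hash:

    package main

    import (
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // trust installs pemPath as an OpenSSL trust anchor by creating the
    // /etc/ssl/certs/<subject-hash>.0 symlink, mirroring the ln -fs above.
    func trust(pemPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            return err
        }
        link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
        os.Remove(link) // -f semantics: replace any stale link
        return os.Symlink(pemPath, link)
    }

    func main() {
        if err := trust("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            panic(err)
        }
    }
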
	I0109 00:24:39.405663 1747564 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0109 00:24:39.409832 1747564 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0109 00:24:39.409864 1747564 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0109 00:24:39.409901 1747564 kubeadm.go:404] StartCluster: {Name:multinode-979047 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-979047 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0109 00:24:39.409980 1747564 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0109 00:24:39.410042 1747564 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0109 00:24:39.458639 1747564 cri.go:89] found id: ""
	I0109 00:24:39.458711 1747564 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0109 00:24:39.468877 1747564 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0109 00:24:39.468936 1747564 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0109 00:24:39.468960 1747564 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0109 00:24:39.469191 1747564 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0109 00:24:39.479790 1747564 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0109 00:24:39.479877 1747564 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0109 00:24:39.488965 1747564 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0109 00:24:39.488992 1747564 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0109 00:24:39.489024 1747564 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0109 00:24:39.489043 1747564 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0109 00:24:39.490091 1747564 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0109 00:24:39.490144 1747564 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0109 00:24:39.541375 1747564 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0109 00:24:39.541446 1747564 command_runner.go:130] > [init] Using Kubernetes version: v1.28.4
	I0109 00:24:39.541561 1747564 kubeadm.go:322] [preflight] Running pre-flight checks
	I0109 00:24:39.541604 1747564 command_runner.go:130] > [preflight] Running pre-flight checks
	I0109 00:24:39.584770 1747564 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0109 00:24:39.584837 1747564 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
	I0109 00:24:39.584943 1747564 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1051-aws
	I0109 00:24:39.584971 1747564 command_runner.go:130] > KERNEL_VERSION: 5.15.0-1051-aws
	I0109 00:24:39.585034 1747564 kubeadm.go:322] OS: Linux
	I0109 00:24:39.585060 1747564 command_runner.go:130] > OS: Linux
	I0109 00:24:39.585132 1747564 kubeadm.go:322] CGROUPS_CPU: enabled
	I0109 00:24:39.585158 1747564 command_runner.go:130] > CGROUPS_CPU: enabled
	I0109 00:24:39.585233 1747564 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0109 00:24:39.585258 1747564 command_runner.go:130] > CGROUPS_CPUACCT: enabled
	I0109 00:24:39.585332 1747564 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0109 00:24:39.585353 1747564 command_runner.go:130] > CGROUPS_CPUSET: enabled
	I0109 00:24:39.585427 1747564 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0109 00:24:39.585453 1747564 command_runner.go:130] > CGROUPS_DEVICES: enabled
	I0109 00:24:39.585528 1747564 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0109 00:24:39.585554 1747564 command_runner.go:130] > CGROUPS_FREEZER: enabled
	I0109 00:24:39.585629 1747564 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0109 00:24:39.585655 1747564 command_runner.go:130] > CGROUPS_MEMORY: enabled
	I0109 00:24:39.585728 1747564 kubeadm.go:322] CGROUPS_PIDS: enabled
	I0109 00:24:39.585746 1747564 command_runner.go:130] > CGROUPS_PIDS: enabled
	I0109 00:24:39.585821 1747564 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I0109 00:24:39.585847 1747564 command_runner.go:130] > CGROUPS_HUGETLB: enabled
	I0109 00:24:39.585922 1747564 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I0109 00:24:39.585948 1747564 command_runner.go:130] > CGROUPS_BLKIO: enabled
	I0109 00:24:39.672110 1747564 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0109 00:24:39.672177 1747564 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0109 00:24:39.672292 1747564 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0109 00:24:39.672315 1747564 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0109 00:24:39.672425 1747564 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0109 00:24:39.672448 1747564 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0109 00:24:39.910779 1747564 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0109 00:24:39.915824 1747564 out.go:204]   - Generating certificates and keys ...
	I0109 00:24:39.911148 1747564 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0109 00:24:39.916009 1747564 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0109 00:24:39.916028 1747564 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0109 00:24:39.916090 1747564 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0109 00:24:39.916100 1747564 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0109 00:24:40.197439 1747564 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0109 00:24:40.197470 1747564 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0109 00:24:40.540751 1747564 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0109 00:24:40.540775 1747564 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I0109 00:24:40.876090 1747564 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0109 00:24:40.876117 1747564 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I0109 00:24:41.257941 1747564 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0109 00:24:41.257973 1747564 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I0109 00:24:41.747559 1747564 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0109 00:24:41.747634 1747564 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I0109 00:24:41.747934 1747564 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-979047] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0109 00:24:41.747977 1747564 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-979047] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0109 00:24:42.115167 1747564 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0109 00:24:42.115194 1747564 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I0109 00:24:42.115508 1747564 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-979047] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0109 00:24:42.115520 1747564 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-979047] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0109 00:24:42.575522 1747564 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0109 00:24:42.575553 1747564 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0109 00:24:42.830637 1747564 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0109 00:24:42.830666 1747564 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0109 00:24:43.553932 1747564 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0109 00:24:43.553969 1747564 command_runner.go:130] > [certs] Generating "sa" key and public key
	I0109 00:24:43.554405 1747564 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0109 00:24:43.554421 1747564 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0109 00:24:44.462260 1747564 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0109 00:24:44.462291 1747564 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0109 00:24:44.968067 1747564 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0109 00:24:44.968093 1747564 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0109 00:24:45.808247 1747564 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0109 00:24:45.808279 1747564 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0109 00:24:45.984731 1747564 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0109 00:24:45.984762 1747564 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0109 00:24:45.985471 1747564 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0109 00:24:45.985494 1747564 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0109 00:24:45.988138 1747564 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0109 00:24:45.990917 1747564 out.go:204]   - Booting up control plane ...
	I0109 00:24:45.988233 1747564 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0109 00:24:45.991016 1747564 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0109 00:24:45.991032 1747564 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0109 00:24:45.991103 1747564 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0109 00:24:45.991112 1747564 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0109 00:24:45.993432 1747564 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0109 00:24:45.993460 1747564 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0109 00:24:46.004957 1747564 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0109 00:24:46.004989 1747564 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0109 00:24:46.005977 1747564 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0109 00:24:46.006003 1747564 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0109 00:24:46.006239 1747564 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0109 00:24:46.006261 1747564 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0109 00:24:46.109904 1747564 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0109 00:24:46.109934 1747564 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0109 00:24:53.112327 1747564 kubeadm.go:322] [apiclient] All control plane components are healthy after 7.002451 seconds
	I0109 00:24:53.112355 1747564 command_runner.go:130] > [apiclient] All control plane components are healthy after 7.002451 seconds
	I0109 00:24:53.112490 1747564 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0109 00:24:53.112529 1747564 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0109 00:24:53.127064 1747564 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0109 00:24:53.127089 1747564 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0109 00:24:53.654596 1747564 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0109 00:24:53.654612 1747564 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I0109 00:24:53.654901 1747564 kubeadm.go:322] [mark-control-plane] Marking the node multinode-979047 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0109 00:24:53.654918 1747564 command_runner.go:130] > [mark-control-plane] Marking the node multinode-979047 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0109 00:24:54.166281 1747564 kubeadm.go:322] [bootstrap-token] Using token: dbf554.69ehfzptp84kaoza
	I0109 00:24:54.168456 1747564 out.go:204]   - Configuring RBAC rules ...
	I0109 00:24:54.166420 1747564 command_runner.go:130] > [bootstrap-token] Using token: dbf554.69ehfzptp84kaoza
	I0109 00:24:54.168573 1747564 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0109 00:24:54.168589 1747564 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0109 00:24:54.173563 1747564 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0109 00:24:54.173583 1747564 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0109 00:24:54.183407 1747564 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0109 00:24:54.183428 1747564 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0109 00:24:54.188952 1747564 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0109 00:24:54.188974 1747564 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0109 00:24:54.192070 1747564 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0109 00:24:54.192093 1747564 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0109 00:24:54.196429 1747564 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0109 00:24:54.196452 1747564 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0109 00:24:54.208729 1747564 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0109 00:24:54.208765 1747564 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0109 00:24:54.436913 1747564 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0109 00:24:54.436938 1747564 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0109 00:24:54.612587 1747564 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0109 00:24:54.612609 1747564 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0109 00:24:54.612615 1747564 kubeadm.go:322] 
	I0109 00:24:54.612672 1747564 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0109 00:24:54.612677 1747564 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I0109 00:24:54.612681 1747564 kubeadm.go:322] 
	I0109 00:24:54.612754 1747564 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0109 00:24:54.612759 1747564 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I0109 00:24:54.612763 1747564 kubeadm.go:322] 
	I0109 00:24:54.612787 1747564 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0109 00:24:54.612792 1747564 command_runner.go:130] >   mkdir -p $HOME/.kube
	I0109 00:24:54.612847 1747564 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0109 00:24:54.612852 1747564 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0109 00:24:54.612899 1747564 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0109 00:24:54.612904 1747564 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0109 00:24:54.612908 1747564 kubeadm.go:322] 
	I0109 00:24:54.612959 1747564 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0109 00:24:54.612963 1747564 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I0109 00:24:54.612967 1747564 kubeadm.go:322] 
	I0109 00:24:54.613012 1747564 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0109 00:24:54.613017 1747564 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0109 00:24:54.613021 1747564 kubeadm.go:322] 
	I0109 00:24:54.613069 1747564 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0109 00:24:54.613074 1747564 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I0109 00:24:54.613144 1747564 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0109 00:24:54.613148 1747564 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0109 00:24:54.613212 1747564 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0109 00:24:54.613216 1747564 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0109 00:24:54.613221 1747564 kubeadm.go:322] 
	I0109 00:24:54.613300 1747564 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0109 00:24:54.613305 1747564 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I0109 00:24:54.613376 1747564 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0109 00:24:54.613381 1747564 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I0109 00:24:54.613385 1747564 kubeadm.go:322] 
	I0109 00:24:54.613464 1747564 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token dbf554.69ehfzptp84kaoza \
	I0109 00:24:54.613479 1747564 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token dbf554.69ehfzptp84kaoza \
	I0109 00:24:54.613576 1747564 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:2f5d2b90e0873ecdcc03ee1f37a9ff73145aa86994d578f7f9f8008617cee046 \
	I0109 00:24:54.613581 1747564 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:2f5d2b90e0873ecdcc03ee1f37a9ff73145aa86994d578f7f9f8008617cee046 \
	I0109 00:24:54.613600 1747564 kubeadm.go:322] 	--control-plane 
	I0109 00:24:54.613604 1747564 command_runner.go:130] > 	--control-plane 
	I0109 00:24:54.613608 1747564 kubeadm.go:322] 
	I0109 00:24:54.613687 1747564 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0109 00:24:54.613703 1747564 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I0109 00:24:54.613708 1747564 kubeadm.go:322] 
	I0109 00:24:54.613785 1747564 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token dbf554.69ehfzptp84kaoza \
	I0109 00:24:54.613790 1747564 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token dbf554.69ehfzptp84kaoza \
	I0109 00:24:54.613889 1747564 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:2f5d2b90e0873ecdcc03ee1f37a9ff73145aa86994d578f7f9f8008617cee046 
	I0109 00:24:54.613895 1747564 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:2f5d2b90e0873ecdcc03ee1f37a9ff73145aa86994d578f7f9f8008617cee046 
	I0109 00:24:54.617503 1747564 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1051-aws\n", err: exit status 1
	I0109 00:24:54.617525 1747564 command_runner.go:130] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1051-aws\n", err: exit status 1
	I0109 00:24:54.617633 1747564 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0109 00:24:54.617639 1747564 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0109 00:24:54.617711 1747564 cni.go:84] Creating CNI manager for ""
	I0109 00:24:54.617732 1747564 cni.go:136] 1 nodes found, recommending kindnet
	I0109 00:24:54.621652 1747564 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0109 00:24:54.623950 1747564 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0109 00:24:54.635728 1747564 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0109 00:24:54.635750 1747564 command_runner.go:130] >   Size: 4030506   	Blocks: 7880       IO Block: 4096   regular file
	I0109 00:24:54.635758 1747564 command_runner.go:130] > Device: 3ah/58d	Inode: 2086842     Links: 1
	I0109 00:24:54.635765 1747564 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0109 00:24:54.635772 1747564 command_runner.go:130] > Access: 2023-12-04 16:39:54.000000000 +0000
	I0109 00:24:54.635778 1747564 command_runner.go:130] > Modify: 2023-12-04 16:39:54.000000000 +0000
	I0109 00:24:54.635784 1747564 command_runner.go:130] > Change: 2024-01-09 00:01:34.410757867 +0000
	I0109 00:24:54.635790 1747564 command_runner.go:130] >  Birth: 2024-01-09 00:01:34.366757483 +0000
	I0109 00:24:54.640192 1747564 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0109 00:24:54.640211 1747564 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0109 00:24:54.691816 1747564 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0109 00:24:55.549826 1747564 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0109 00:24:55.560448 1747564 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0109 00:24:55.568373 1747564 command_runner.go:130] > serviceaccount/kindnet created
	I0109 00:24:55.579383 1747564 command_runner.go:130] > daemonset.apps/kindnet created
	I0109 00:24:55.585045 1747564 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0109 00:24:55.585125 1747564 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:24:55.585165 1747564 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a minikube.k8s.io/name=multinode-979047 minikube.k8s.io/updated_at=2024_01_09T00_24_55_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:24:55.745568 1747564 command_runner.go:130] > node/multinode-979047 labeled
	I0109 00:24:55.749943 1747564 command_runner.go:130] > -16
	I0109 00:24:55.764645 1747564 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0109 00:24:55.768443 1747564 ops.go:34] apiserver oom_adj: -16
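
The -16 read back at 00:24:55.749943 comes from `cat /proc/$(pgrep kube-apiserver)/oom_adj`; on the legacy oom_adj scale that corresponds to the strongly negative oom_score_adj the kubelet assigns to control-plane static pods, so the probe doubles as a check that the API server is running as a kubelet-managed process. A pure-Go sketch of the same probe, walking /proc instead of using pgrep:

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strings"
    )

    // Find kube-apiserver's pid by its /proc/<pid>/comm entry and print
    // its (legacy) oom_adj value, e.g. -16 as logged above.
    func main() {
        procs, _ := filepath.Glob("/proc/[0-9]*/comm")
        for _, comm := range procs {
            name, err := os.ReadFile(comm)
            if err != nil || strings.TrimSpace(string(name)) != "kube-apiserver" {
                continue
            }
            adj, _ := os.ReadFile(filepath.Join(filepath.Dir(comm), "oom_adj"))
            fmt.Print(string(adj))
            return
        }
    }
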
	I0109 00:24:55.768533 1747564 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:24:55.874123 1747564 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0109 00:24:56.268665 1747564 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:24:56.359502 1747564 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0109 00:24:56.769191 1747564 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:24:56.858452 1747564 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0109 00:24:57.268731 1747564 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:24:57.357768 1747564 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0109 00:24:57.769176 1747564 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:24:57.866107 1747564 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0109 00:24:58.268623 1747564 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:24:58.361870 1747564 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0109 00:24:58.769445 1747564 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:24:58.862721 1747564 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0109 00:24:59.268726 1747564 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:24:59.361958 1747564 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0109 00:24:59.769290 1747564 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:24:59.860850 1747564 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0109 00:25:00.269288 1747564 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:25:00.370932 1747564 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0109 00:25:00.769493 1747564 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:25:00.864886 1747564 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0109 00:25:01.269178 1747564 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:25:01.366017 1747564 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0109 00:25:01.769031 1747564 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:25:01.876854 1747564 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0109 00:25:02.268869 1747564 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:25:02.360922 1747564 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0109 00:25:02.769547 1747564 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:25:02.862468 1747564 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0109 00:25:03.268645 1747564 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:25:03.356121 1747564 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0109 00:25:03.769505 1747564 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:25:03.857779 1747564 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0109 00:25:04.269644 1747564 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:25:04.361031 1747564 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0109 00:25:04.769266 1747564 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:25:04.859314 1747564 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0109 00:25:05.268682 1747564 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:25:05.359366 1747564 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0109 00:25:05.769200 1747564 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:25:05.875203 1747564 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0109 00:25:06.269649 1747564 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:25:06.367574 1747564 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0109 00:25:06.769630 1747564 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:25:06.869296 1747564 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0109 00:25:07.268734 1747564 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:25:07.417081 1747564 command_runner.go:130] > NAME      SECRETS   AGE
	I0109 00:25:07.417101 1747564 command_runner.go:130] > default   0         0s
	I0109 00:25:07.420947 1747564 kubeadm.go:1088] duration metric: took 11.835889552s to wait for elevateKubeSystemPrivileges.
	I0109 00:25:07.420972 1747564 kubeadm.go:406] StartCluster complete in 28.011073839s
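
The burst of `kubectl get sa default` invocations above is a readiness poll: minikube retries roughly every half second until the token controller has created the default ServiceAccount (a good proxy for the control plane being fully operational; 11.8s here), since granting kube-system privileges is pointless before the account exists. A sketch of the same poll with an explicit deadline:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // Retry "kubectl get sa default" every 500ms until it succeeds or the
    // deadline passes, mirroring the loop logged above.
    func main() {
        deadline := time.Now().Add(2 * time.Minute)
        for time.Now().Before(deadline) {
            cmd := exec.Command("kubectl", "get", "sa", "default",
                "--kubeconfig=/var/lib/minikube/kubeconfig")
            if err := cmd.Run(); err == nil {
                fmt.Println("default ServiceAccount is ready")
                return
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("timed out waiting for default ServiceAccount")
    }
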
	I0109 00:25:07.420988 1747564 settings.go:142] acquiring lock: {Name:mk0f4be07809726b91ed42aaaa2120516a2004e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0109 00:25:07.421045 1747564 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17830-1678586/kubeconfig
	I0109 00:25:07.421678 1747564 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17830-1678586/kubeconfig: {Name:mkd692fadb6f1e94cc8cf2ddbb66429fa6c0e8fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0109 00:25:07.422168 1747564 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17830-1678586/kubeconfig
	I0109 00:25:07.422565 1747564 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0109 00:25:07.422821 1747564 config.go:182] Loaded profile config "multinode-979047": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0109 00:25:07.422993 1747564 addons.go:505] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0109 00:25:07.423054 1747564 addons.go:69] Setting storage-provisioner=true in profile "multinode-979047"
	I0109 00:25:07.423072 1747564 addons.go:237] Setting addon storage-provisioner=true in "multinode-979047"
	I0109 00:25:07.423122 1747564 host.go:66] Checking if "multinode-979047" exists ...
	I0109 00:25:07.423571 1747564 cli_runner.go:164] Run: docker container inspect multinode-979047 --format={{.State.Status}}
	I0109 00:25:07.422928 1747564 kapi.go:59] client config for multinode-979047: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/multinode-979047/client.crt", KeyFile:"/home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/multinode-979047/client.key", CAFile:"/home/jenkins/minikube-integration/17830-1678586/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16b9a10), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0109 00:25:07.424718 1747564 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0109 00:25:07.424761 1747564 round_trippers.go:469] Request Headers:
	I0109 00:25:07.424785 1747564 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0109 00:25:07.424808 1747564 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:25:07.425035 1747564 cert_rotation.go:137] Starting client certificate rotation controller
	I0109 00:25:07.425467 1747564 addons.go:69] Setting default-storageclass=true in profile "multinode-979047"
	I0109 00:25:07.425508 1747564 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-979047"
	I0109 00:25:07.425825 1747564 cli_runner.go:164] Run: docker container inspect multinode-979047 --format={{.State.Status}}
	I0109 00:25:07.486968 1747564 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17830-1678586/kubeconfig
	I0109 00:25:07.487233 1747564 kapi.go:59] client config for multinode-979047: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/multinode-979047/client.crt", KeyFile:"/home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/multinode-979047/client.key", CAFile:"/home/jenkins/minikube-integration/17830-1678586/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16b9a10), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0109 00:25:07.487493 1747564 addons.go:237] Setting addon default-storageclass=true in "multinode-979047"
	I0109 00:25:07.487523 1747564 host.go:66] Checking if "multinode-979047" exists ...
	I0109 00:25:07.487970 1747564 cli_runner.go:164] Run: docker container inspect multinode-979047 --format={{.State.Status}}
	I0109 00:25:07.494331 1747564 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0109 00:25:07.498664 1747564 addons.go:429] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0109 00:25:07.498688 1747564 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0109 00:25:07.498756 1747564 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-979047
	I0109 00:25:07.517718 1747564 round_trippers.go:574] Response Status: 200 OK in 92 milliseconds
	I0109 00:25:07.517752 1747564 round_trippers.go:577] Response Headers:
	I0109 00:25:07.517764 1747564 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:25:07 GMT
	I0109 00:25:07.517770 1747564 round_trippers.go:580]     Audit-Id: 60b28aca-1644-46b2-b774-03997ac795d2
	I0109 00:25:07.517777 1747564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:25:07.517784 1747564 round_trippers.go:580]     Content-Type: application/json
	I0109 00:25:07.517790 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab5afe39-1a2e-4c0a-808e-6147fe73e525
	I0109 00:25:07.517796 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 95fdf47b-223c-4eb6-a41b-30b44ade375a
	I0109 00:25:07.517804 1747564 round_trippers.go:580]     Content-Length: 291
	I0109 00:25:07.517837 1747564 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"9fa0a87b-3794-41d2-9f6b-1de3c8c3d9c9","resourceVersion":"353","creationTimestamp":"2024-01-09T00:24:54Z"},"spec":{"replicas":2},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0109 00:25:07.518625 1747564 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"9fa0a87b-3794-41d2-9f6b-1de3c8c3d9c9","resourceVersion":"353","creationTimestamp":"2024-01-09T00:24:54Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0109 00:25:07.518690 1747564 round_trippers.go:463] PUT https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0109 00:25:07.518700 1747564 round_trippers.go:469] Request Headers:
	I0109 00:25:07.518709 1747564 round_trippers.go:473]     Content-Type: application/json
	I0109 00:25:07.518716 1747564 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0109 00:25:07.518730 1747564 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:25:07.530659 1747564 addons.go:429] installing /etc/kubernetes/addons/storageclass.yaml
	I0109 00:25:07.530678 1747564 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0109 00:25:07.530743 1747564 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-979047
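
In the "scp memory" lines above, the addon manifests are rendered in memory and written straight into the node over SSH, then applied with the node-side kubectl (the exact apply commands appear a few lines below). A rough out-of-band equivalent, assuming a manifest file on the host:

    minikube -p multinode-979047 cp ./storageclass.yaml /etc/kubernetes/addons/storageclass.yaml
    minikube -p multinode-979047 ssh -- sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
        /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
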
	I0109 00:25:07.557077 1747564 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34444 SSHKeyPath:/home/jenkins/minikube-integration/17830-1678586/.minikube/machines/multinode-979047/id_rsa Username:docker}
	I0109 00:25:07.580777 1747564 round_trippers.go:574] Response Status: 200 OK in 62 milliseconds
	I0109 00:25:07.580799 1747564 round_trippers.go:577] Response Headers:
	I0109 00:25:07.580807 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab5afe39-1a2e-4c0a-808e-6147fe73e525
	I0109 00:25:07.580814 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 95fdf47b-223c-4eb6-a41b-30b44ade375a
	I0109 00:25:07.580821 1747564 round_trippers.go:580]     Content-Length: 291
	I0109 00:25:07.580827 1747564 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:25:07 GMT
	I0109 00:25:07.580833 1747564 round_trippers.go:580]     Audit-Id: 2d203517-f35e-426a-b61e-7282ff748abc
	I0109 00:25:07.580839 1747564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:25:07.580845 1747564 round_trippers.go:580]     Content-Type: application/json
	I0109 00:25:07.580868 1747564 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"9fa0a87b-3794-41d2-9f6b-1de3c8c3d9c9","resourceVersion":"379","creationTimestamp":"2024-01-09T00:24:54Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0109 00:25:07.583852 1747564 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34444 SSHKeyPath:/home/jenkins/minikube-integration/17830-1678586/.minikube/machines/multinode-979047/id_rsa Username:docker}
	I0109 00:25:07.688561 1747564 command_runner.go:130] > apiVersion: v1
	I0109 00:25:07.688626 1747564 command_runner.go:130] > data:
	I0109 00:25:07.688660 1747564 command_runner.go:130] >   Corefile: |
	I0109 00:25:07.688679 1747564 command_runner.go:130] >     .:53 {
	I0109 00:25:07.688716 1747564 command_runner.go:130] >         errors
	I0109 00:25:07.688741 1747564 command_runner.go:130] >         health {
	I0109 00:25:07.688760 1747564 command_runner.go:130] >            lameduck 5s
	I0109 00:25:07.688779 1747564 command_runner.go:130] >         }
	I0109 00:25:07.688815 1747564 command_runner.go:130] >         ready
	I0109 00:25:07.688844 1747564 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0109 00:25:07.688863 1747564 command_runner.go:130] >            pods insecure
	I0109 00:25:07.688882 1747564 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0109 00:25:07.688912 1747564 command_runner.go:130] >            ttl 30
	I0109 00:25:07.688932 1747564 command_runner.go:130] >         }
	I0109 00:25:07.688957 1747564 command_runner.go:130] >         prometheus :9153
	I0109 00:25:07.688976 1747564 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0109 00:25:07.689009 1747564 command_runner.go:130] >            max_concurrent 1000
	I0109 00:25:07.689030 1747564 command_runner.go:130] >         }
	I0109 00:25:07.689048 1747564 command_runner.go:130] >         cache 30
	I0109 00:25:07.689067 1747564 command_runner.go:130] >         loop
	I0109 00:25:07.689086 1747564 command_runner.go:130] >         reload
	I0109 00:25:07.689115 1747564 command_runner.go:130] >         loadbalance
	I0109 00:25:07.689140 1747564 command_runner.go:130] >     }
	I0109 00:25:07.689160 1747564 command_runner.go:130] > kind: ConfigMap
	I0109 00:25:07.689179 1747564 command_runner.go:130] > metadata:
	I0109 00:25:07.689201 1747564 command_runner.go:130] >   creationTimestamp: "2024-01-09T00:24:54Z"
	I0109 00:25:07.689234 1747564 command_runner.go:130] >   name: coredns
	I0109 00:25:07.689259 1747564 command_runner.go:130] >   namespace: kube-system
	I0109 00:25:07.689286 1747564 command_runner.go:130] >   resourceVersion: "267"
	I0109 00:25:07.689307 1747564 command_runner.go:130] >   uid: 060c27e1-1339-4bfa-8338-189eaa843fab
	I0109 00:25:07.689481 1747564 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.58.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
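
Reflowed, the one-liner above fetches the coredns ConfigMap, uses sed to insert a hosts block (resolving host.minikube.internal to the host gateway 192.168.58.1) before the forward plugin and a log directive before errors, then replaces the ConfigMap. A trimmed sketch of the same injection run from the host rather than over SSH, assuming the kubeconfig context minikube creates for the profile:

    kubectl --context multinode-979047 -n kube-system get configmap coredns -o yaml \
      | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.58.1 host.minikube.internal\n           fallthrough\n        }' \
      | kubectl --context multinode-979047 replace -f -
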
	I0109 00:25:07.768482 1747564 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0109 00:25:07.791786 1747564 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0109 00:25:07.925080 1747564 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0109 00:25:07.925141 1747564 round_trippers.go:469] Request Headers:
	I0109 00:25:07.925164 1747564 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0109 00:25:07.925186 1747564 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:25:07.938827 1747564 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0109 00:25:07.938909 1747564 round_trippers.go:577] Response Headers:
	I0109 00:25:07.938933 1747564 round_trippers.go:580]     Content-Type: application/json
	I0109 00:25:07.938954 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab5afe39-1a2e-4c0a-808e-6147fe73e525
	I0109 00:25:07.938984 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 95fdf47b-223c-4eb6-a41b-30b44ade375a
	I0109 00:25:07.939010 1747564 round_trippers.go:580]     Content-Length: 291
	I0109 00:25:07.939031 1747564 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:25:07 GMT
	I0109 00:25:07.939053 1747564 round_trippers.go:580]     Audit-Id: dda09b43-ec5b-4c2f-abca-daddb7c13d23
	I0109 00:25:07.939084 1747564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:25:07.940889 1747564 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"9fa0a87b-3794-41d2-9f6b-1de3c8c3d9c9","resourceVersion":"401","creationTimestamp":"2024-01-09T00:24:54Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0109 00:25:07.941042 1747564 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-979047" context rescaled to 1 replicas
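
The GET/PUT pair above rewrites the Scale subresource of the coredns deployment, dropping spec.replicas from 2 to 1 so the single control-plane node runs one DNS pod. A hedged equivalent from the host, again assuming the profile's kubeconfig context:

    # Conventional form of the same change:
    kubectl --context multinode-979047 -n kube-system scale deployment coredns --replicas=1
    # Or read the same Scale subresource the log shows:
    kubectl --context multinode-979047 get --raw \
        /apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
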
	I0109 00:25:07.941089 1747564 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0109 00:25:07.944354 1747564 out.go:177] * Verifying Kubernetes components...
	I0109 00:25:07.946412 1747564 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0109 00:25:08.254516 1747564 command_runner.go:130] > configmap/coredns replaced
	I0109 00:25:08.264434 1747564 start.go:929] {"host.minikube.internal": 192.168.58.1} host record injected into CoreDNS's ConfigMap
	I0109 00:25:08.264464 1747564 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I0109 00:25:08.264649 1747564 round_trippers.go:463] GET https://192.168.58.2:8443/apis/storage.k8s.io/v1/storageclasses
	I0109 00:25:08.264678 1747564 round_trippers.go:469] Request Headers:
	I0109 00:25:08.264701 1747564 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:25:08.264726 1747564 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0109 00:25:08.269066 1747564 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0109 00:25:08.269136 1747564 round_trippers.go:577] Response Headers:
	I0109 00:25:08.269157 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 95fdf47b-223c-4eb6-a41b-30b44ade375a
	I0109 00:25:08.269179 1747564 round_trippers.go:580]     Content-Length: 1273
	I0109 00:25:08.269212 1747564 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:25:08 GMT
	I0109 00:25:08.269239 1747564 round_trippers.go:580]     Audit-Id: d120de0f-27b1-4f3d-81b8-28d2d2ab1378
	I0109 00:25:08.269260 1747564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:25:08.269282 1747564 round_trippers.go:580]     Content-Type: application/json
	I0109 00:25:08.269313 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab5afe39-1a2e-4c0a-808e-6147fe73e525
	I0109 00:25:08.269358 1747564 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"403"},"items":[{"metadata":{"name":"standard","uid":"cd60ec76-da66-4ced-a7e6-e7f4db65b35b","resourceVersion":"402","creationTimestamp":"2024-01-09T00:25:08Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-01-09T00:25:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kuberne
tes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I0109 00:25:08.269775 1747564 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"cd60ec76-da66-4ced-a7e6-e7f4db65b35b","resourceVersion":"402","creationTimestamp":"2024-01-09T00:25:08Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-01-09T00:25:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclas
s.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0109 00:25:08.269848 1747564 round_trippers.go:463] PUT https://192.168.58.2:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0109 00:25:08.269870 1747564 round_trippers.go:469] Request Headers:
	I0109 00:25:08.269904 1747564 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:25:08.269928 1747564 round_trippers.go:473]     Content-Type: application/json
	I0109 00:25:08.269949 1747564 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0109 00:25:08.273226 1747564 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0109 00:25:08.273278 1747564 round_trippers.go:577] Response Headers:
	I0109 00:25:08.273298 1747564 round_trippers.go:580]     Audit-Id: acd56079-eb9a-4a19-967b-e6b90b855627
	I0109 00:25:08.273320 1747564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:25:08.273353 1747564 round_trippers.go:580]     Content-Type: application/json
	I0109 00:25:08.273376 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab5afe39-1a2e-4c0a-808e-6147fe73e525
	I0109 00:25:08.273396 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 95fdf47b-223c-4eb6-a41b-30b44ade375a
	I0109 00:25:08.273417 1747564 round_trippers.go:580]     Content-Length: 1220
	I0109 00:25:08.273438 1747564 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:25:08 GMT
	I0109 00:25:08.273687 1747564 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"cd60ec76-da66-4ced-a7e6-e7f4db65b35b","resourceVersion":"402","creationTimestamp":"2024-01-09T00:25:08Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-01-09T00:25:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storagecla
ss.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0109 00:25:08.410349 1747564 command_runner.go:130] > serviceaccount/storage-provisioner created
	I0109 00:25:08.416549 1747564 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0109 00:25:08.427041 1747564 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0109 00:25:08.436968 1747564 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0109 00:25:08.446197 1747564 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I0109 00:25:08.458473 1747564 command_runner.go:130] > pod/storage-provisioner created
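
Both addons' objects now exist: the standard StorageClass (marked default through the storageclass.kubernetes.io/is-default-class annotation, provisioner k8s.io/minikube-hostpath, per the PUT response above) and the storage-provisioner service account, RBAC bindings, endpoints, and pod. A quick hedged verification, assuming the profile's context:

    kubectl --context multinode-979047 get storageclass standard \
        -o jsonpath='{.metadata.annotations.storageclass\.kubernetes\.io/is-default-class}'   # expect: true
    kubectl --context multinode-979047 -n kube-system get pod storage-provisioner
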
	I0109 00:25:08.467034 1747564 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0109 00:25:08.465002 1747564 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17830-1678586/kubeconfig
	I0109 00:25:08.469032 1747564 addons.go:508] enable addons completed in 1.046030437s: enabled=[default-storageclass storage-provisioner]
	I0109 00:25:08.469412 1747564 kapi.go:59] client config for multinode-979047: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/multinode-979047/client.crt", KeyFile:"/home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/multinode-979047/client.key", CAFile:"/home/jenkins/minikube-integration/17830-1678586/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16b9a10), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0109 00:25:08.469681 1747564 node_ready.go:35] waiting up to 6m0s for node "multinode-979047" to be "Ready" ...
	I0109 00:25:08.469781 1747564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-979047
	I0109 00:25:08.469792 1747564 round_trippers.go:469] Request Headers:
	I0109 00:25:08.469801 1747564 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:25:08.469808 1747564 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0109 00:25:08.472338 1747564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0109 00:25:08.472359 1747564 round_trippers.go:577] Response Headers:
	I0109 00:25:08.472367 1747564 round_trippers.go:580]     Audit-Id: dc88b2bd-cc5a-48fa-b09c-2f9c27bc84ee
	I0109 00:25:08.472377 1747564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:25:08.472384 1747564 round_trippers.go:580]     Content-Type: application/json
	I0109 00:25:08.472390 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab5afe39-1a2e-4c0a-808e-6147fe73e525
	I0109 00:25:08.472399 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 95fdf47b-223c-4eb6-a41b-30b44ade375a
	I0109 00:25:08.472406 1747564 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:25:08 GMT
	I0109 00:25:08.472679 1747564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-979047","uid":"225848e2-4f2a-49f7-a1f4-c7468a250f39","resourceVersion":"349","creationTimestamp":"2024-01-09T00:24:51Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-979047","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-979047","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_09T00_24_55_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-09T00:24:51Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0109 00:25:08.969952 1747564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-979047
	I0109 00:25:08.969978 1747564 round_trippers.go:469] Request Headers:
	I0109 00:25:08.969987 1747564 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:25:08.969994 1747564 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0109 00:25:08.972457 1747564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0109 00:25:08.972475 1747564 round_trippers.go:577] Response Headers:
	I0109 00:25:08.972484 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 95fdf47b-223c-4eb6-a41b-30b44ade375a
	I0109 00:25:08.972490 1747564 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:25:08 GMT
	I0109 00:25:08.972496 1747564 round_trippers.go:580]     Audit-Id: 861ffa23-b468-4d3d-aa84-ba7357e295ea
	I0109 00:25:08.972502 1747564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:25:08.972509 1747564 round_trippers.go:580]     Content-Type: application/json
	I0109 00:25:08.972515 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab5afe39-1a2e-4c0a-808e-6147fe73e525
	I0109 00:25:08.972713 1747564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-979047","uid":"225848e2-4f2a-49f7-a1f4-c7468a250f39","resourceVersion":"349","creationTimestamp":"2024-01-09T00:24:51Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-979047","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-979047","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_09T00_24_55_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-09T00:24:51Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0109 00:25:09.469933 1747564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-979047
	I0109 00:25:09.469959 1747564 round_trippers.go:469] Request Headers:
	I0109 00:25:09.469969 1747564 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:25:09.469977 1747564 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0109 00:25:09.472477 1747564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0109 00:25:09.472497 1747564 round_trippers.go:577] Response Headers:
	I0109 00:25:09.472506 1747564 round_trippers.go:580]     Content-Type: application/json
	I0109 00:25:09.472513 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab5afe39-1a2e-4c0a-808e-6147fe73e525
	I0109 00:25:09.472519 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 95fdf47b-223c-4eb6-a41b-30b44ade375a
	I0109 00:25:09.472526 1747564 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:25:09 GMT
	I0109 00:25:09.472532 1747564 round_trippers.go:580]     Audit-Id: 7b2bb8aa-0bb9-4144-aed8-2172284f74bc
	I0109 00:25:09.472538 1747564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:25:09.472694 1747564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-979047","uid":"225848e2-4f2a-49f7-a1f4-c7468a250f39","resourceVersion":"349","creationTimestamp":"2024-01-09T00:24:51Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-979047","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-979047","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_09T00_24_55_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-09T00:24:51Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0109 00:25:09.969947 1747564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-979047
	I0109 00:25:09.969973 1747564 round_trippers.go:469] Request Headers:
	I0109 00:25:09.969983 1747564 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0109 00:25:09.969994 1747564 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:25:09.972558 1747564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0109 00:25:09.972625 1747564 round_trippers.go:577] Response Headers:
	I0109 00:25:09.972647 1747564 round_trippers.go:580]     Content-Type: application/json
	I0109 00:25:09.972669 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab5afe39-1a2e-4c0a-808e-6147fe73e525
	I0109 00:25:09.972704 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 95fdf47b-223c-4eb6-a41b-30b44ade375a
	I0109 00:25:09.972730 1747564 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:25:09 GMT
	I0109 00:25:09.972752 1747564 round_trippers.go:580]     Audit-Id: ca7e0443-5881-463b-971d-eeb21e35f560
	I0109 00:25:09.972778 1747564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:25:09.972912 1747564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-979047","uid":"225848e2-4f2a-49f7-a1f4-c7468a250f39","resourceVersion":"349","creationTimestamp":"2024-01-09T00:24:51Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-979047","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-979047","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_09T00_24_55_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-09T00:24:51Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0109 00:25:10.469952 1747564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-979047
	I0109 00:25:10.469978 1747564 round_trippers.go:469] Request Headers:
	I0109 00:25:10.469988 1747564 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:25:10.469995 1747564 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0109 00:25:10.472609 1747564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0109 00:25:10.472678 1747564 round_trippers.go:577] Response Headers:
	I0109 00:25:10.472700 1747564 round_trippers.go:580]     Audit-Id: 23d5d1f9-0578-4964-9c56-83d03cf368f6
	I0109 00:25:10.472720 1747564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:25:10.472756 1747564 round_trippers.go:580]     Content-Type: application/json
	I0109 00:25:10.472770 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab5afe39-1a2e-4c0a-808e-6147fe73e525
	I0109 00:25:10.472778 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 95fdf47b-223c-4eb6-a41b-30b44ade375a
	I0109 00:25:10.472784 1747564 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:25:10 GMT
	I0109 00:25:10.472976 1747564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-979047","uid":"225848e2-4f2a-49f7-a1f4-c7468a250f39","resourceVersion":"427","creationTimestamp":"2024-01-09T00:24:51Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-979047","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-979047","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_09T00_24_55_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-09T00:24:51Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0109 00:25:10.473395 1747564 node_ready.go:49] node "multinode-979047" has status "Ready":"True"
	I0109 00:25:10.473415 1747564 node_ready.go:38] duration metric: took 2.003716318s waiting for node "multinode-979047" to be "Ready" ...
	I0109 00:25:10.473426 1747564 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
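
The node loop above (and the pod loop that follows) is a plain 500ms poll: GET the object, check its Ready condition, retry until it is True or the 6m deadline passes. Hedged kubectl equivalents of the two waits, assuming the profile's context:

    kubectl --context multinode-979047 wait --for=condition=Ready node/multinode-979047 --timeout=6m
    kubectl --context multinode-979047 -n kube-system wait --for=condition=Ready \
        pod -l k8s-app=kube-dns --timeout=6m
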
	I0109 00:25:10.473492 1747564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0109 00:25:10.473505 1747564 round_trippers.go:469] Request Headers:
	I0109 00:25:10.473513 1747564 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:25:10.473519 1747564 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0109 00:25:10.477387 1747564 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0109 00:25:10.477410 1747564 round_trippers.go:577] Response Headers:
	I0109 00:25:10.477418 1747564 round_trippers.go:580]     Audit-Id: 3082e6b1-5182-4ed1-a352-46edf22d6383
	I0109 00:25:10.477425 1747564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:25:10.477431 1747564 round_trippers.go:580]     Content-Type: application/json
	I0109 00:25:10.477437 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab5afe39-1a2e-4c0a-808e-6147fe73e525
	I0109 00:25:10.477443 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 95fdf47b-223c-4eb6-a41b-30b44ade375a
	I0109 00:25:10.477450 1747564 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:25:10 GMT
	I0109 00:25:10.477794 1747564 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"433"},"items":[{"metadata":{"name":"coredns-5dd5756b68-shbhd","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"46759197-0373-4f95-ba9c-8065624d0f27","resourceVersion":"432","creationTimestamp":"2024-01-09T00:25:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"eed4b8ed-33af-4fa3-b93c-c7a4f038d6a2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:25:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"eed4b8ed-33af-4fa3-b93c-c7a4f038d6a2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56434 chars]
	I0109 00:25:10.482030 1747564 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-shbhd" in "kube-system" namespace to be "Ready" ...
	I0109 00:25:10.482148 1747564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-shbhd
	I0109 00:25:10.482154 1747564 round_trippers.go:469] Request Headers:
	I0109 00:25:10.482162 1747564 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:25:10.482171 1747564 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0109 00:25:10.484870 1747564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0109 00:25:10.484893 1747564 round_trippers.go:577] Response Headers:
	I0109 00:25:10.484902 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 95fdf47b-223c-4eb6-a41b-30b44ade375a
	I0109 00:25:10.484909 1747564 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:25:10 GMT
	I0109 00:25:10.484915 1747564 round_trippers.go:580]     Audit-Id: 71a578e7-f45f-4790-807c-f220a5e9c799
	I0109 00:25:10.484921 1747564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:25:10.484931 1747564 round_trippers.go:580]     Content-Type: application/json
	I0109 00:25:10.484937 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab5afe39-1a2e-4c0a-808e-6147fe73e525
	I0109 00:25:10.485450 1747564 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-shbhd","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"46759197-0373-4f95-ba9c-8065624d0f27","resourceVersion":"432","creationTimestamp":"2024-01-09T00:25:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"eed4b8ed-33af-4fa3-b93c-c7a4f038d6a2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:25:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"eed4b8ed-33af-4fa3-b93c-c7a4f038d6a2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I0109 00:25:10.485986 1747564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-979047
	I0109 00:25:10.486001 1747564 round_trippers.go:469] Request Headers:
	I0109 00:25:10.486010 1747564 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:25:10.486017 1747564 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0109 00:25:10.488394 1747564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0109 00:25:10.488423 1747564 round_trippers.go:577] Response Headers:
	I0109 00:25:10.488431 1747564 round_trippers.go:580]     Content-Type: application/json
	I0109 00:25:10.488437 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab5afe39-1a2e-4c0a-808e-6147fe73e525
	I0109 00:25:10.488446 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 95fdf47b-223c-4eb6-a41b-30b44ade375a
	I0109 00:25:10.488455 1747564 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:25:10 GMT
	I0109 00:25:10.488466 1747564 round_trippers.go:580]     Audit-Id: 25127ced-3847-4459-aaf2-ce6ef67d01fe
	I0109 00:25:10.488472 1747564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:25:10.488822 1747564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-979047","uid":"225848e2-4f2a-49f7-a1f4-c7468a250f39","resourceVersion":"427","creationTimestamp":"2024-01-09T00:24:51Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-979047","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-979047","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_09T00_24_55_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-09T00:24:51Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0109 00:25:10.982315 1747564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-shbhd
	I0109 00:25:10.982339 1747564 round_trippers.go:469] Request Headers:
	I0109 00:25:10.982348 1747564 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0109 00:25:10.982356 1747564 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:25:10.988799 1747564 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0109 00:25:10.988822 1747564 round_trippers.go:577] Response Headers:
	I0109 00:25:10.988831 1747564 round_trippers.go:580]     Content-Type: application/json
	I0109 00:25:10.988837 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab5afe39-1a2e-4c0a-808e-6147fe73e525
	I0109 00:25:10.988843 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 95fdf47b-223c-4eb6-a41b-30b44ade375a
	I0109 00:25:10.988849 1747564 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:25:10 GMT
	I0109 00:25:10.988856 1747564 round_trippers.go:580]     Audit-Id: ed92e230-02a6-4aca-ba24-8a437f2f62de
	I0109 00:25:10.988862 1747564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:25:10.989036 1747564 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-shbhd","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"46759197-0373-4f95-ba9c-8065624d0f27","resourceVersion":"432","creationTimestamp":"2024-01-09T00:25:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"eed4b8ed-33af-4fa3-b93c-c7a4f038d6a2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:25:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"eed4b8ed-33af-4fa3-b93c-c7a4f038d6a2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I0109 00:25:10.989582 1747564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-979047
	I0109 00:25:10.989617 1747564 round_trippers.go:469] Request Headers:
	I0109 00:25:10.989636 1747564 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0109 00:25:10.989644 1747564 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:25:11.007109 1747564 round_trippers.go:574] Response Status: 200 OK in 17 milliseconds
	I0109 00:25:11.007180 1747564 round_trippers.go:577] Response Headers:
	I0109 00:25:11.007202 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab5afe39-1a2e-4c0a-808e-6147fe73e525
	I0109 00:25:11.007223 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 95fdf47b-223c-4eb6-a41b-30b44ade375a
	I0109 00:25:11.007253 1747564 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:25:11 GMT
	I0109 00:25:11.007279 1747564 round_trippers.go:580]     Audit-Id: c9f17dec-d626-47ad-a745-68be14aab2f5
	I0109 00:25:11.007302 1747564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:25:11.007334 1747564 round_trippers.go:580]     Content-Type: application/json
	I0109 00:25:11.007973 1747564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-979047","uid":"225848e2-4f2a-49f7-a1f4-c7468a250f39","resourceVersion":"427","creationTimestamp":"2024-01-09T00:24:51Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-979047","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-979047","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_09T00_24_55_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-09T00:24:51Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0109 00:25:11.482684 1747564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-shbhd
	I0109 00:25:11.482710 1747564 round_trippers.go:469] Request Headers:
	I0109 00:25:11.482721 1747564 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:25:11.482728 1747564 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0109 00:25:11.485285 1747564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0109 00:25:11.485350 1747564 round_trippers.go:577] Response Headers:
	I0109 00:25:11.485373 1747564 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:25:11 GMT
	I0109 00:25:11.485395 1747564 round_trippers.go:580]     Audit-Id: 3c90650e-e230-40f6-9ea8-5544f43ef5f2
	I0109 00:25:11.485429 1747564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:25:11.485456 1747564 round_trippers.go:580]     Content-Type: application/json
	I0109 00:25:11.485478 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab5afe39-1a2e-4c0a-808e-6147fe73e525
	I0109 00:25:11.485499 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 95fdf47b-223c-4eb6-a41b-30b44ade375a
	I0109 00:25:11.485628 1747564 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-shbhd","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"46759197-0373-4f95-ba9c-8065624d0f27","resourceVersion":"432","creationTimestamp":"2024-01-09T00:25:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"eed4b8ed-33af-4fa3-b93c-c7a4f038d6a2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:25:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"eed4b8ed-33af-4fa3-b93c-c7a4f038d6a2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I0109 00:25:11.486153 1747564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-979047
	I0109 00:25:11.486171 1747564 round_trippers.go:469] Request Headers:
	I0109 00:25:11.486189 1747564 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:25:11.486199 1747564 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0109 00:25:11.488683 1747564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0109 00:25:11.488744 1747564 round_trippers.go:577] Response Headers:
	I0109 00:25:11.488758 1747564 round_trippers.go:580]     Audit-Id: de945cca-0271-4a15-aba2-1a08de961b6e
	I0109 00:25:11.488766 1747564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:25:11.488773 1747564 round_trippers.go:580]     Content-Type: application/json
	I0109 00:25:11.488779 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab5afe39-1a2e-4c0a-808e-6147fe73e525
	I0109 00:25:11.488798 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 95fdf47b-223c-4eb6-a41b-30b44ade375a
	I0109 00:25:11.488810 1747564 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:25:11 GMT
	I0109 00:25:11.488967 1747564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-979047","uid":"225848e2-4f2a-49f7-a1f4-c7468a250f39","resourceVersion":"427","creationTimestamp":"2024-01-09T00:24:51Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-979047","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-979047","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_09T00_24_55_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-09T00:24:51Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0109 00:25:11.982270 1747564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-shbhd
	I0109 00:25:11.982307 1747564 round_trippers.go:469] Request Headers:
	I0109 00:25:11.982319 1747564 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:25:11.982330 1747564 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0109 00:25:11.984858 1747564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0109 00:25:11.984879 1747564 round_trippers.go:577] Response Headers:
	I0109 00:25:11.984888 1747564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:25:11.984894 1747564 round_trippers.go:580]     Content-Type: application/json
	I0109 00:25:11.984900 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab5afe39-1a2e-4c0a-808e-6147fe73e525
	I0109 00:25:11.984907 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 95fdf47b-223c-4eb6-a41b-30b44ade375a
	I0109 00:25:11.984918 1747564 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:25:11 GMT
	I0109 00:25:11.984932 1747564 round_trippers.go:580]     Audit-Id: e09bb623-3d88-499d-8551-e409dbe09802
	I0109 00:25:11.985093 1747564 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-shbhd","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"46759197-0373-4f95-ba9c-8065624d0f27","resourceVersion":"443","creationTimestamp":"2024-01-09T00:25:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"eed4b8ed-33af-4fa3-b93c-c7a4f038d6a2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:25:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"eed4b8ed-33af-4fa3-b93c-c7a4f038d6a2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6263 chars]
	I0109 00:25:11.985708 1747564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-979047
	I0109 00:25:11.985727 1747564 round_trippers.go:469] Request Headers:
	I0109 00:25:11.985735 1747564 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0109 00:25:11.985743 1747564 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:25:11.987987 1747564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0109 00:25:11.988008 1747564 round_trippers.go:577] Response Headers:
	I0109 00:25:11.988016 1747564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:25:11.988022 1747564 round_trippers.go:580]     Content-Type: application/json
	I0109 00:25:11.988028 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab5afe39-1a2e-4c0a-808e-6147fe73e525
	I0109 00:25:11.988035 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 95fdf47b-223c-4eb6-a41b-30b44ade375a
	I0109 00:25:11.988044 1747564 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:25:11 GMT
	I0109 00:25:11.988055 1747564 round_trippers.go:580]     Audit-Id: 82ef5794-6eb9-43ed-98c5-330287291dbd
	I0109 00:25:11.988239 1747564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-979047","uid":"225848e2-4f2a-49f7-a1f4-c7468a250f39","resourceVersion":"427","creationTimestamp":"2024-01-09T00:24:51Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-979047","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-979047","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_09T00_24_55_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-09T00:24:51Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0109 00:25:11.988635 1747564 pod_ready.go:92] pod "coredns-5dd5756b68-shbhd" in "kube-system" namespace has status "Ready":"True"
	I0109 00:25:11.988655 1747564 pod_ready.go:81] duration metric: took 1.50659654s waiting for pod "coredns-5dd5756b68-shbhd" in "kube-system" namespace to be "Ready" ...
	I0109 00:25:11.988665 1747564 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-979047" in "kube-system" namespace to be "Ready" ...
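
Readiness here is read straight off status.conditions: coredns went Ready in about 1.5s, and the waiter now turns to etcd-multinode-979047. A sketch of checking the same condition directly, assuming the profile's context:

    kubectl --context multinode-979047 -n kube-system get pod etcd-multinode-979047 \
        -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
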
	I0109 00:25:11.988725 1747564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-979047
	I0109 00:25:11.988736 1747564 round_trippers.go:469] Request Headers:
	I0109 00:25:11.988745 1747564 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:25:11.988752 1747564 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0109 00:25:11.990918 1747564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0109 00:25:11.990934 1747564 round_trippers.go:577] Response Headers:
	I0109 00:25:11.990941 1747564 round_trippers.go:580]     Audit-Id: ae03bf32-ab04-4d25-af13-8e9510fd9714
	I0109 00:25:11.990948 1747564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:25:11.990954 1747564 round_trippers.go:580]     Content-Type: application/json
	I0109 00:25:11.990960 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab5afe39-1a2e-4c0a-808e-6147fe73e525
	I0109 00:25:11.990969 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 95fdf47b-223c-4eb6-a41b-30b44ade375a
	I0109 00:25:11.990983 1747564 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:25:11 GMT
	I0109 00:25:11.991319 1747564 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-979047","namespace":"kube-system","uid":"a5a13277-0ebc-493c-a6d6-f46ae712ddb9","resourceVersion":"332","creationTimestamp":"2024-01-09T00:24:54Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"0cec3581efd3ceb60c1df7924ae017cf","kubernetes.io/config.mirror":"0cec3581efd3ceb60c1df7924ae017cf","kubernetes.io/config.seen":"2024-01-09T00:24:54.513907754Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-979047","uid":"225848e2-4f2a-49f7-a1f4-c7468a250f39","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:24:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6058 chars]
	I0109 00:25:11.991769 1747564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-979047
	I0109 00:25:11.991786 1747564 round_trippers.go:469] Request Headers:
	I0109 00:25:11.991795 1747564 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0109 00:25:11.991802 1747564 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:25:11.993849 1747564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0109 00:25:11.993867 1747564 round_trippers.go:577] Response Headers:
	I0109 00:25:11.993875 1747564 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:25:11 GMT
	I0109 00:25:11.993882 1747564 round_trippers.go:580]     Audit-Id: 07078896-ce72-460e-8b41-51511815a958
	I0109 00:25:11.993888 1747564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:25:11.993899 1747564 round_trippers.go:580]     Content-Type: application/json
	I0109 00:25:11.993909 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab5afe39-1a2e-4c0a-808e-6147fe73e525
	I0109 00:25:11.993916 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 95fdf47b-223c-4eb6-a41b-30b44ade375a
	I0109 00:25:11.994121 1747564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-979047","uid":"225848e2-4f2a-49f7-a1f4-c7468a250f39","resourceVersion":"427","creationTimestamp":"2024-01-09T00:24:51Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-979047","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-979047","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_09T00_24_55_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-09T00:24:51Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0109 00:25:12.489173 1747564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-979047
	I0109 00:25:12.489199 1747564 round_trippers.go:469] Request Headers:
	I0109 00:25:12.489210 1747564 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:25:12.489217 1747564 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0109 00:25:12.491768 1747564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0109 00:25:12.491825 1747564 round_trippers.go:577] Response Headers:
	I0109 00:25:12.491846 1747564 round_trippers.go:580]     Audit-Id: 98516cb2-3e2e-4d08-9313-d2b013134e56
	I0109 00:25:12.491869 1747564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:25:12.491903 1747564 round_trippers.go:580]     Content-Type: application/json
	I0109 00:25:12.491925 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab5afe39-1a2e-4c0a-808e-6147fe73e525
	I0109 00:25:12.491948 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 95fdf47b-223c-4eb6-a41b-30b44ade375a
	I0109 00:25:12.491971 1747564 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:25:12 GMT
	I0109 00:25:12.492099 1747564 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-979047","namespace":"kube-system","uid":"a5a13277-0ebc-493c-a6d6-f46ae712ddb9","resourceVersion":"332","creationTimestamp":"2024-01-09T00:24:54Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"0cec3581efd3ceb60c1df7924ae017cf","kubernetes.io/config.mirror":"0cec3581efd3ceb60c1df7924ae017cf","kubernetes.io/config.seen":"2024-01-09T00:24:54.513907754Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-979047","uid":"225848e2-4f2a-49f7-a1f4-c7468a250f39","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:24:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6058 chars]
	I0109 00:25:12.492574 1747564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-979047
	I0109 00:25:12.492587 1747564 round_trippers.go:469] Request Headers:
	I0109 00:25:12.492597 1747564 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:25:12.492608 1747564 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0109 00:25:12.494827 1747564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0109 00:25:12.494903 1747564 round_trippers.go:577] Response Headers:
	I0109 00:25:12.494919 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 95fdf47b-223c-4eb6-a41b-30b44ade375a
	I0109 00:25:12.494927 1747564 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:25:12 GMT
	I0109 00:25:12.494941 1747564 round_trippers.go:580]     Audit-Id: f6c281a5-75d2-4880-93bf-6ea67e9b6573
	I0109 00:25:12.494948 1747564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:25:12.494954 1747564 round_trippers.go:580]     Content-Type: application/json
	I0109 00:25:12.494972 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab5afe39-1a2e-4c0a-808e-6147fe73e525
	I0109 00:25:12.495081 1747564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-979047","uid":"225848e2-4f2a-49f7-a1f4-c7468a250f39","resourceVersion":"427","creationTimestamp":"2024-01-09T00:24:51Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-979047","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-979047","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_09T00_24_55_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-09T00:24:51Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0109 00:25:12.989197 1747564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-979047
	I0109 00:25:12.989259 1747564 round_trippers.go:469] Request Headers:
	I0109 00:25:12.989275 1747564 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:25:12.989284 1747564 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0109 00:25:12.991802 1747564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0109 00:25:12.991856 1747564 round_trippers.go:577] Response Headers:
	I0109 00:25:12.991894 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 95fdf47b-223c-4eb6-a41b-30b44ade375a
	I0109 00:25:12.991919 1747564 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:25:12 GMT
	I0109 00:25:12.991940 1747564 round_trippers.go:580]     Audit-Id: 9e1775cf-3339-40a9-840c-61f30005a742
	I0109 00:25:12.991975 1747564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:25:12.991999 1747564 round_trippers.go:580]     Content-Type: application/json
	I0109 00:25:12.992020 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab5afe39-1a2e-4c0a-808e-6147fe73e525
	I0109 00:25:12.992150 1747564 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-979047","namespace":"kube-system","uid":"a5a13277-0ebc-493c-a6d6-f46ae712ddb9","resourceVersion":"332","creationTimestamp":"2024-01-09T00:24:54Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"0cec3581efd3ceb60c1df7924ae017cf","kubernetes.io/config.mirror":"0cec3581efd3ceb60c1df7924ae017cf","kubernetes.io/config.seen":"2024-01-09T00:24:54.513907754Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-979047","uid":"225848e2-4f2a-49f7-a1f4-c7468a250f39","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:24:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6058 chars]
	I0109 00:25:12.992635 1747564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-979047
	I0109 00:25:12.992651 1747564 round_trippers.go:469] Request Headers:
	I0109 00:25:12.992659 1747564 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:25:12.992666 1747564 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0109 00:25:12.994878 1747564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0109 00:25:12.994896 1747564 round_trippers.go:577] Response Headers:
	I0109 00:25:12.994903 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 95fdf47b-223c-4eb6-a41b-30b44ade375a
	I0109 00:25:12.994912 1747564 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:25:12 GMT
	I0109 00:25:12.994918 1747564 round_trippers.go:580]     Audit-Id: 94c6276d-67c3-47de-a8e2-1f69bb9b11a9
	I0109 00:25:12.994924 1747564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:25:12.994933 1747564 round_trippers.go:580]     Content-Type: application/json
	I0109 00:25:12.994948 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab5afe39-1a2e-4c0a-808e-6147fe73e525
	I0109 00:25:12.995124 1747564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-979047","uid":"225848e2-4f2a-49f7-a1f4-c7468a250f39","resourceVersion":"427","creationTimestamp":"2024-01-09T00:24:51Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-979047","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-979047","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_09T00_24_55_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-09T00:24:51Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0109 00:25:13.489045 1747564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-979047
	I0109 00:25:13.489075 1747564 round_trippers.go:469] Request Headers:
	I0109 00:25:13.489085 1747564 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:25:13.489093 1747564 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0109 00:25:13.491676 1747564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0109 00:25:13.491698 1747564 round_trippers.go:577] Response Headers:
	I0109 00:25:13.491708 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab5afe39-1a2e-4c0a-808e-6147fe73e525
	I0109 00:25:13.491715 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 95fdf47b-223c-4eb6-a41b-30b44ade375a
	I0109 00:25:13.491721 1747564 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:25:13 GMT
	I0109 00:25:13.491727 1747564 round_trippers.go:580]     Audit-Id: ebf3131a-c2bc-4604-b255-69400322d76b
	I0109 00:25:13.491740 1747564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:25:13.491748 1747564 round_trippers.go:580]     Content-Type: application/json
	I0109 00:25:13.492121 1747564 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-979047","namespace":"kube-system","uid":"a5a13277-0ebc-493c-a6d6-f46ae712ddb9","resourceVersion":"332","creationTimestamp":"2024-01-09T00:24:54Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"0cec3581efd3ceb60c1df7924ae017cf","kubernetes.io/config.mirror":"0cec3581efd3ceb60c1df7924ae017cf","kubernetes.io/config.seen":"2024-01-09T00:24:54.513907754Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-979047","uid":"225848e2-4f2a-49f7-a1f4-c7468a250f39","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:24:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6058 chars]
	I0109 00:25:13.492593 1747564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-979047
	I0109 00:25:13.492622 1747564 round_trippers.go:469] Request Headers:
	I0109 00:25:13.492637 1747564 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:25:13.492648 1747564 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0109 00:25:13.494865 1747564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0109 00:25:13.494921 1747564 round_trippers.go:577] Response Headers:
	I0109 00:25:13.494942 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab5afe39-1a2e-4c0a-808e-6147fe73e525
	I0109 00:25:13.494962 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 95fdf47b-223c-4eb6-a41b-30b44ade375a
	I0109 00:25:13.494998 1747564 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:25:13 GMT
	I0109 00:25:13.495034 1747564 round_trippers.go:580]     Audit-Id: 2eb4e9dc-6b93-4f02-862b-0d281d6446fc
	I0109 00:25:13.495055 1747564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:25:13.495075 1747564 round_trippers.go:580]     Content-Type: application/json
	I0109 00:25:13.495199 1747564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-979047","uid":"225848e2-4f2a-49f7-a1f4-c7468a250f39","resourceVersion":"427","creationTimestamp":"2024-01-09T00:24:51Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-979047","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-979047","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_09T00_24_55_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-09T00:24:51Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0109 00:25:13.989236 1747564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-979047
	I0109 00:25:13.989261 1747564 round_trippers.go:469] Request Headers:
	I0109 00:25:13.989270 1747564 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0109 00:25:13.989285 1747564 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:25:13.991810 1747564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0109 00:25:13.991836 1747564 round_trippers.go:577] Response Headers:
	I0109 00:25:13.991845 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 95fdf47b-223c-4eb6-a41b-30b44ade375a
	I0109 00:25:13.991852 1747564 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:25:13 GMT
	I0109 00:25:13.991858 1747564 round_trippers.go:580]     Audit-Id: 7de0b56a-9290-466c-946f-da57be2bf781
	I0109 00:25:13.991865 1747564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:25:13.991876 1747564 round_trippers.go:580]     Content-Type: application/json
	I0109 00:25:13.991882 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab5afe39-1a2e-4c0a-808e-6147fe73e525
	I0109 00:25:13.992074 1747564 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-979047","namespace":"kube-system","uid":"a5a13277-0ebc-493c-a6d6-f46ae712ddb9","resourceVersion":"332","creationTimestamp":"2024-01-09T00:24:54Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"0cec3581efd3ceb60c1df7924ae017cf","kubernetes.io/config.mirror":"0cec3581efd3ceb60c1df7924ae017cf","kubernetes.io/config.seen":"2024-01-09T00:24:54.513907754Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-979047","uid":"225848e2-4f2a-49f7-a1f4-c7468a250f39","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:24:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6058 chars]
	I0109 00:25:13.992540 1747564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-979047
	I0109 00:25:13.992555 1747564 round_trippers.go:469] Request Headers:
	I0109 00:25:13.992563 1747564 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:25:13.992570 1747564 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0109 00:25:13.994816 1747564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0109 00:25:13.994834 1747564 round_trippers.go:577] Response Headers:
	I0109 00:25:13.994842 1747564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:25:13.994849 1747564 round_trippers.go:580]     Content-Type: application/json
	I0109 00:25:13.994855 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab5afe39-1a2e-4c0a-808e-6147fe73e525
	I0109 00:25:13.994866 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 95fdf47b-223c-4eb6-a41b-30b44ade375a
	I0109 00:25:13.994873 1747564 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:25:13 GMT
	I0109 00:25:13.994879 1747564 round_trippers.go:580]     Audit-Id: 1c550234-8103-4c95-96cd-dd502141cc7c
	I0109 00:25:13.995301 1747564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-979047","uid":"225848e2-4f2a-49f7-a1f4-c7468a250f39","resourceVersion":"427","creationTimestamp":"2024-01-09T00:24:51Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-979047","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-979047","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_09T00_24_55_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-09T00:24:51Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0109 00:25:13.995700 1747564 pod_ready.go:102] pod "etcd-multinode-979047" in "kube-system" namespace has status "Ready":"False"
	I0109 00:25:14.488899 1747564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-979047
	I0109 00:25:14.488921 1747564 round_trippers.go:469] Request Headers:
	I0109 00:25:14.488932 1747564 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0109 00:25:14.488939 1747564 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:25:14.491503 1747564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0109 00:25:14.491555 1747564 round_trippers.go:577] Response Headers:
	I0109 00:25:14.491577 1747564 round_trippers.go:580]     Audit-Id: 9d90648f-afed-4bde-a4cd-62386ec5967c
	I0109 00:25:14.491585 1747564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:25:14.491591 1747564 round_trippers.go:580]     Content-Type: application/json
	I0109 00:25:14.491600 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab5afe39-1a2e-4c0a-808e-6147fe73e525
	I0109 00:25:14.491606 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 95fdf47b-223c-4eb6-a41b-30b44ade375a
	I0109 00:25:14.491628 1747564 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:25:14 GMT
	I0109 00:25:14.491743 1747564 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-979047","namespace":"kube-system","uid":"a5a13277-0ebc-493c-a6d6-f46ae712ddb9","resourceVersion":"332","creationTimestamp":"2024-01-09T00:24:54Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"0cec3581efd3ceb60c1df7924ae017cf","kubernetes.io/config.mirror":"0cec3581efd3ceb60c1df7924ae017cf","kubernetes.io/config.seen":"2024-01-09T00:24:54.513907754Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-979047","uid":"225848e2-4f2a-49f7-a1f4-c7468a250f39","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:24:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6058 chars]
	I0109 00:25:14.492228 1747564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-979047
	I0109 00:25:14.492245 1747564 round_trippers.go:469] Request Headers:
	I0109 00:25:14.492253 1747564 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:25:14.492260 1747564 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0109 00:25:14.494458 1747564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0109 00:25:14.494477 1747564 round_trippers.go:577] Response Headers:
	I0109 00:25:14.494485 1747564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:25:14.494491 1747564 round_trippers.go:580]     Content-Type: application/json
	I0109 00:25:14.494498 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab5afe39-1a2e-4c0a-808e-6147fe73e525
	I0109 00:25:14.494504 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 95fdf47b-223c-4eb6-a41b-30b44ade375a
	I0109 00:25:14.494513 1747564 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:25:14 GMT
	I0109 00:25:14.494524 1747564 round_trippers.go:580]     Audit-Id: 62720ee6-9ed4-4cbf-bcdd-3bd778edaead
	I0109 00:25:14.494730 1747564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-979047","uid":"225848e2-4f2a-49f7-a1f4-c7468a250f39","resourceVersion":"427","creationTimestamp":"2024-01-09T00:24:51Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-979047","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-979047","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_09T00_24_55_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-09T00:24:51Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0109 00:25:14.989777 1747564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-979047
	I0109 00:25:14.989802 1747564 round_trippers.go:469] Request Headers:
	I0109 00:25:14.989811 1747564 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:25:14.989818 1747564 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0109 00:25:14.992376 1747564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0109 00:25:14.992445 1747564 round_trippers.go:577] Response Headers:
	I0109 00:25:14.992467 1747564 round_trippers.go:580]     Audit-Id: 00db7e11-ef08-438d-94a2-13f64b2bd3cb
	I0109 00:25:14.992490 1747564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:25:14.992522 1747564 round_trippers.go:580]     Content-Type: application/json
	I0109 00:25:14.992548 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab5afe39-1a2e-4c0a-808e-6147fe73e525
	I0109 00:25:14.992570 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 95fdf47b-223c-4eb6-a41b-30b44ade375a
	I0109 00:25:14.992591 1747564 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:25:14 GMT
	I0109 00:25:14.992793 1747564 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-979047","namespace":"kube-system","uid":"a5a13277-0ebc-493c-a6d6-f46ae712ddb9","resourceVersion":"453","creationTimestamp":"2024-01-09T00:24:54Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"0cec3581efd3ceb60c1df7924ae017cf","kubernetes.io/config.mirror":"0cec3581efd3ceb60c1df7924ae017cf","kubernetes.io/config.seen":"2024-01-09T00:24:54.513907754Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-979047","uid":"225848e2-4f2a-49f7-a1f4-c7468a250f39","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:24:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 5833 chars]
	I0109 00:25:14.993281 1747564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-979047
	I0109 00:25:14.993295 1747564 round_trippers.go:469] Request Headers:
	I0109 00:25:14.993304 1747564 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:25:14.993310 1747564 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0109 00:25:14.995639 1747564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0109 00:25:14.995660 1747564 round_trippers.go:577] Response Headers:
	I0109 00:25:14.995668 1747564 round_trippers.go:580]     Content-Type: application/json
	I0109 00:25:14.995674 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab5afe39-1a2e-4c0a-808e-6147fe73e525
	I0109 00:25:14.995681 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 95fdf47b-223c-4eb6-a41b-30b44ade375a
	I0109 00:25:14.995688 1747564 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:25:14 GMT
	I0109 00:25:14.995696 1747564 round_trippers.go:580]     Audit-Id: f3c541d8-3e9f-4b28-951c-27e262b35870
	I0109 00:25:14.995704 1747564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:25:14.995958 1747564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-979047","uid":"225848e2-4f2a-49f7-a1f4-c7468a250f39","resourceVersion":"427","creationTimestamp":"2024-01-09T00:24:51Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-979047","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-979047","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_09T00_24_55_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-09T00:24:51Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0109 00:25:14.996352 1747564 pod_ready.go:92] pod "etcd-multinode-979047" in "kube-system" namespace has status "Ready":"True"
	I0109 00:25:14.996371 1747564 pod_ready.go:81] duration metric: took 3.007694019s waiting for pod "etcd-multinode-979047" in "kube-system" namespace to be "Ready" ...
	I0109 00:25:14.996385 1747564 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-979047" in "kube-system" namespace to be "Ready" ...
	I0109 00:25:14.996449 1747564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-979047
	I0109 00:25:14.996458 1747564 round_trippers.go:469] Request Headers:
	I0109 00:25:14.996466 1747564 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:25:14.996474 1747564 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0109 00:25:14.998923 1747564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0109 00:25:14.998943 1747564 round_trippers.go:577] Response Headers:
	I0109 00:25:14.998951 1747564 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:25:14 GMT
	I0109 00:25:14.998958 1747564 round_trippers.go:580]     Audit-Id: 9568f72e-7012-4a24-8834-ad5f7577c0c9
	I0109 00:25:14.998964 1747564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:25:14.998970 1747564 round_trippers.go:580]     Content-Type: application/json
	I0109 00:25:14.998976 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab5afe39-1a2e-4c0a-808e-6147fe73e525
	I0109 00:25:14.998982 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 95fdf47b-223c-4eb6-a41b-30b44ade375a
	I0109 00:25:14.999255 1747564 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-979047","namespace":"kube-system","uid":"38619ccd-6ea3-42d0-8b26-b59a1af5875d","resourceVersion":"452","creationTimestamp":"2024-01-09T00:24:54Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"efcc78c31982772633c5559a7765d574","kubernetes.io/config.mirror":"efcc78c31982772633c5559a7765d574","kubernetes.io/config.seen":"2024-01-09T00:24:54.513911972Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-979047","uid":"225848e2-4f2a-49f7-a1f4-c7468a250f39","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:24:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8219 chars]
	I0109 00:25:14.999798 1747564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-979047
	I0109 00:25:14.999812 1747564 round_trippers.go:469] Request Headers:
	I0109 00:25:14.999819 1747564 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0109 00:25:14.999827 1747564 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:25:15.002515 1747564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0109 00:25:15.002595 1747564 round_trippers.go:577] Response Headers:
	I0109 00:25:15.002620 1747564 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:25:15 GMT
	I0109 00:25:15.002644 1747564 round_trippers.go:580]     Audit-Id: 8d0a96aa-1370-459d-b69c-2087b3bd9388
	I0109 00:25:15.002676 1747564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:25:15.002703 1747564 round_trippers.go:580]     Content-Type: application/json
	I0109 00:25:15.002717 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab5afe39-1a2e-4c0a-808e-6147fe73e525
	I0109 00:25:15.002724 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 95fdf47b-223c-4eb6-a41b-30b44ade375a
	I0109 00:25:15.002910 1747564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-979047","uid":"225848e2-4f2a-49f7-a1f4-c7468a250f39","resourceVersion":"427","creationTimestamp":"2024-01-09T00:24:51Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-979047","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-979047","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_09T00_24_55_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-09T00:24:51Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0109 00:25:15.003372 1747564 pod_ready.go:92] pod "kube-apiserver-multinode-979047" in "kube-system" namespace has status "Ready":"True"
	I0109 00:25:15.003394 1747564 pod_ready.go:81] duration metric: took 6.996459ms waiting for pod "kube-apiserver-multinode-979047" in "kube-system" namespace to be "Ready" ...
	I0109 00:25:15.003409 1747564 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-979047" in "kube-system" namespace to be "Ready" ...
	I0109 00:25:15.003486 1747564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-979047
	I0109 00:25:15.003499 1747564 round_trippers.go:469] Request Headers:
	I0109 00:25:15.003508 1747564 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:25:15.003515 1747564 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0109 00:25:15.006480 1747564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0109 00:25:15.006511 1747564 round_trippers.go:577] Response Headers:
	I0109 00:25:15.006520 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 95fdf47b-223c-4eb6-a41b-30b44ade375a
	I0109 00:25:15.006559 1747564 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:25:15 GMT
	I0109 00:25:15.006571 1747564 round_trippers.go:580]     Audit-Id: eb00740c-a4a3-4ff5-9186-767c319103a0
	I0109 00:25:15.006578 1747564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:25:15.006592 1747564 round_trippers.go:580]     Content-Type: application/json
	I0109 00:25:15.006612 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab5afe39-1a2e-4c0a-808e-6147fe73e525
	I0109 00:25:15.006801 1747564 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-979047","namespace":"kube-system","uid":"cd5437df-a3ac-4591-8cce-765486ff6afb","resourceVersion":"454","creationTimestamp":"2024-01-09T00:24:53Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"4ec0a5968f98276fe45449a372f72485","kubernetes.io/config.mirror":"4ec0a5968f98276fe45449a372f72485","kubernetes.io/config.seen":"2024-01-09T00:24:47.067515085Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-979047","uid":"225848e2-4f2a-49f7-a1f4-c7468a250f39","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:24:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7794 chars]
	I0109 00:25:15.007378 1747564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-979047
	I0109 00:25:15.007397 1747564 round_trippers.go:469] Request Headers:
	I0109 00:25:15.007407 1747564 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:25:15.007414 1747564 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0109 00:25:15.010064 1747564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0109 00:25:15.010135 1747564 round_trippers.go:577] Response Headers:
	I0109 00:25:15.010173 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab5afe39-1a2e-4c0a-808e-6147fe73e525
	I0109 00:25:15.010197 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 95fdf47b-223c-4eb6-a41b-30b44ade375a
	I0109 00:25:15.010218 1747564 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:25:15 GMT
	I0109 00:25:15.010255 1747564 round_trippers.go:580]     Audit-Id: 8c2fa5bc-ec41-4d57-a28a-5bb8b4bcd0ca
	I0109 00:25:15.010282 1747564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:25:15.010304 1747564 round_trippers.go:580]     Content-Type: application/json
	I0109 00:25:15.010499 1747564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-979047","uid":"225848e2-4f2a-49f7-a1f4-c7468a250f39","resourceVersion":"427","creationTimestamp":"2024-01-09T00:24:51Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-979047","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-979047","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_09T00_24_55_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-09T00:24:51Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0109 00:25:15.010974 1747564 pod_ready.go:92] pod "kube-controller-manager-multinode-979047" in "kube-system" namespace has status "Ready":"True"
	I0109 00:25:15.010995 1747564 pod_ready.go:81] duration metric: took 7.5782ms waiting for pod "kube-controller-manager-multinode-979047" in "kube-system" namespace to be "Ready" ...
	I0109 00:25:15.011019 1747564 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-r5w9b" in "kube-system" namespace to be "Ready" ...
	I0109 00:25:15.011116 1747564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-r5w9b
	I0109 00:25:15.011126 1747564 round_trippers.go:469] Request Headers:
	I0109 00:25:15.011135 1747564 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:25:15.011142 1747564 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0109 00:25:15.013768 1747564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0109 00:25:15.013793 1747564 round_trippers.go:577] Response Headers:
	I0109 00:25:15.013802 1747564 round_trippers.go:580]     Audit-Id: 0b82352f-2a8e-499f-ade5-678cc56bddea
	I0109 00:25:15.013808 1747564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:25:15.013815 1747564 round_trippers.go:580]     Content-Type: application/json
	I0109 00:25:15.013821 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab5afe39-1a2e-4c0a-808e-6147fe73e525
	I0109 00:25:15.013828 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 95fdf47b-223c-4eb6-a41b-30b44ade375a
	I0109 00:25:15.013835 1747564 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:25:15 GMT
	I0109 00:25:15.014258 1747564 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-r5w9b","generateName":"kube-proxy-","namespace":"kube-system","uid":"0b49bb1e-f3f4-4760-bb78-97d8bc5ae4e6","resourceVersion":"423","creationTimestamp":"2024-01-09T00:25:07Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"920e93f8-1b6d-4b70-a3ad-394be18be16a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:25:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"920e93f8-1b6d-4b70-a3ad-394be18be16a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5510 chars]
	I0109 00:25:15.014806 1747564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-979047
	I0109 00:25:15.014828 1747564 round_trippers.go:469] Request Headers:
	I0109 00:25:15.014837 1747564 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:25:15.014845 1747564 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0109 00:25:15.017891 1747564 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0109 00:25:15.017969 1747564 round_trippers.go:577] Response Headers:
	I0109 00:25:15.018005 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab5afe39-1a2e-4c0a-808e-6147fe73e525
	I0109 00:25:15.018026 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 95fdf47b-223c-4eb6-a41b-30b44ade375a
	I0109 00:25:15.018069 1747564 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:25:15 GMT
	I0109 00:25:15.018086 1747564 round_trippers.go:580]     Audit-Id: cbbeb882-8066-408e-b1ef-8b4cd128cb9b
	I0109 00:25:15.018095 1747564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:25:15.018102 1747564 round_trippers.go:580]     Content-Type: application/json
	I0109 00:25:15.018268 1747564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-979047","uid":"225848e2-4f2a-49f7-a1f4-c7468a250f39","resourceVersion":"427","creationTimestamp":"2024-01-09T00:24:51Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-979047","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-979047","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_09T00_24_55_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-09T00:24:51Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0109 00:25:15.018779 1747564 pod_ready.go:92] pod "kube-proxy-r5w9b" in "kube-system" namespace has status "Ready":"True"
	I0109 00:25:15.018813 1747564 pod_ready.go:81] duration metric: took 7.779416ms waiting for pod "kube-proxy-r5w9b" in "kube-system" namespace to be "Ready" ...
	I0109 00:25:15.018830 1747564 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-979047" in "kube-system" namespace to be "Ready" ...
	I0109 00:25:15.018922 1747564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-979047
	I0109 00:25:15.018934 1747564 round_trippers.go:469] Request Headers:
	I0109 00:25:15.018943 1747564 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:25:15.018950 1747564 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0109 00:25:15.021613 1747564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0109 00:25:15.021649 1747564 round_trippers.go:577] Response Headers:
	I0109 00:25:15.021660 1747564 round_trippers.go:580]     Audit-Id: 95ea772f-d4b3-4995-be0b-c8f84362a353
	I0109 00:25:15.021667 1747564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:25:15.021677 1747564 round_trippers.go:580]     Content-Type: application/json
	I0109 00:25:15.021685 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab5afe39-1a2e-4c0a-808e-6147fe73e525
	I0109 00:25:15.021691 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 95fdf47b-223c-4eb6-a41b-30b44ade375a
	I0109 00:25:15.021698 1747564 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:25:15 GMT
	I0109 00:25:15.022133 1747564 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-979047","namespace":"kube-system","uid":"332540fe-3c27-468f-9108-453e0086f012","resourceVersion":"451","creationTimestamp":"2024-01-09T00:24:54Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"b7338a293703ada6fed293fc7aaddf4d","kubernetes.io/config.mirror":"b7338a293703ada6fed293fc7aaddf4d","kubernetes.io/config.seen":"2024-01-09T00:24:54.513914901Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-979047","uid":"225848e2-4f2a-49f7-a1f4-c7468a250f39","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:24:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4676 chars]
	I0109 00:25:15.022621 1747564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-979047
	I0109 00:25:15.022642 1747564 round_trippers.go:469] Request Headers:
	I0109 00:25:15.022653 1747564 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:25:15.022661 1747564 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0109 00:25:15.025306 1747564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0109 00:25:15.025347 1747564 round_trippers.go:577] Response Headers:
	I0109 00:25:15.025357 1747564 round_trippers.go:580]     Audit-Id: 55091aa7-eb00-4494-95be-3e3b91ab494c
	I0109 00:25:15.025364 1747564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:25:15.025372 1747564 round_trippers.go:580]     Content-Type: application/json
	I0109 00:25:15.025378 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab5afe39-1a2e-4c0a-808e-6147fe73e525
	I0109 00:25:15.025391 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 95fdf47b-223c-4eb6-a41b-30b44ade375a
	I0109 00:25:15.025399 1747564 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:25:15 GMT
	I0109 00:25:15.025816 1747564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-979047","uid":"225848e2-4f2a-49f7-a1f4-c7468a250f39","resourceVersion":"427","creationTimestamp":"2024-01-09T00:24:51Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-979047","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-979047","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_09T00_24_55_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-09T00:24:51Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0109 00:25:15.026234 1747564 pod_ready.go:92] pod "kube-scheduler-multinode-979047" in "kube-system" namespace has status "Ready":"True"
	I0109 00:25:15.026253 1747564 pod_ready.go:81] duration metric: took 7.405727ms waiting for pod "kube-scheduler-multinode-979047" in "kube-system" namespace to be "Ready" ...
	I0109 00:25:15.026267 1747564 pod_ready.go:38] duration metric: took 4.552829499s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
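
The pod_ready lines above all follow one pattern: GET the pod, read its PodReady condition, sleep roughly 500ms, and repeat until the condition is True or the 6m0s budget expires. A minimal client-go sketch of that pattern follows; it is an illustration, not minikube's actual implementation, and the kubeconfig path, namespace, and pod name are assumptions taken from this run's log.

    // readiness_poll.go: a sketch of the "wait for Ready" loop seen above.
    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // isPodReady reports whether the PodReady condition is True.
    func isPodReady(pod *corev1.Pod) bool {
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile) // assumed kubeconfig location
    	if err != nil {
    		panic(err)
    	}
    	client := kubernetes.NewForConfigOrDie(cfg)

    	deadline := time.Now().Add(6 * time.Minute) // same budget as "waiting up to 6m0s" above
    	for time.Now().Before(deadline) {
    		pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "etcd-multinode-979047", metav1.GetOptions{})
    		if err == nil && isPodReady(pod) {
    			fmt.Println("pod is Ready")
    			return
    		}
    		time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence between GETs in the log
    	}
    	fmt.Println("timed out waiting for pod to be Ready")
    }

Polling like this (rather than a watch) is simple and self-healing across apiserver restarts, which is why it suits a bring-up check.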
	I0109 00:25:15.026287 1747564 api_server.go:52] waiting for apiserver process to appear ...
	I0109 00:25:15.026354 1747564 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0109 00:25:15.039291 1747564 command_runner.go:130] > 1245
	I0109 00:25:15.041123 1747564 api_server.go:72] duration metric: took 7.099976089s to wait for apiserver process to appear ...
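
The ssh_runner line above shows how the "apiserver process" check works: run pgrep on the node and treat any printed PID as success (the log captured "1245"). A local sketch of the same check, assuming pgrep is available; minikube actually executes this over its SSH runner inside the node, not locally:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// -x: match the whole command line, -n: newest match, -f: match against full args.
    	out, err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
    	if err != nil {
    		// pgrep exits non-zero when nothing matches.
    		fmt.Println("apiserver process not found:", err)
    		return
    	}
    	fmt.Println("apiserver pid:", strings.TrimSpace(string(out)))
    }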
	I0109 00:25:15.041153 1747564 api_server.go:88] waiting for apiserver healthz status ...
	I0109 00:25:15.041174 1747564 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0109 00:25:15.051215 1747564 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
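
The healthz probe above expects HTTP 200 with the literal body "ok". A sketch of the same probe in plain net/http follows; TLS verification is skipped here purely for brevity, whereas the real client trusts the cluster CA. Anonymous access to /healthz works on a default kubeadm-style cluster via the system:public-info-viewer binding, so no client certificates are needed for this endpoint.

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    )

    func main() {
    	client := &http.Client{Transport: &http.Transport{
    		TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // sketch only; use the cluster CA in practice
    	}}
    	resp, err := client.Get("https://192.168.58.2:8443/healthz")
    	if err != nil {
    		fmt.Println("healthz unreachable:", err)
    		return
    	}
    	defer resp.Body.Close()
    	body, _ := io.ReadAll(resp.Body)
    	fmt.Printf("healthz: %d %s\n", resp.StatusCode, body) // expect: 200 ok
    }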
	I0109 00:25:15.051312 1747564 round_trippers.go:463] GET https://192.168.58.2:8443/version
	I0109 00:25:15.051324 1747564 round_trippers.go:469] Request Headers:
	I0109 00:25:15.051334 1747564 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:25:15.051341 1747564 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0109 00:25:15.052712 1747564 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0109 00:25:15.052736 1747564 round_trippers.go:577] Response Headers:
	I0109 00:25:15.052745 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 95fdf47b-223c-4eb6-a41b-30b44ade375a
	I0109 00:25:15.052757 1747564 round_trippers.go:580]     Content-Length: 264
	I0109 00:25:15.052764 1747564 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:25:15 GMT
	I0109 00:25:15.052770 1747564 round_trippers.go:580]     Audit-Id: 24c74eb8-f900-4d40-820f-0b0b813b97ad
	I0109 00:25:15.052777 1747564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:25:15.052784 1747564 round_trippers.go:580]     Content-Type: application/json
	I0109 00:25:15.052790 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab5afe39-1a2e-4c0a-808e-6147fe73e525
	I0109 00:25:15.052815 1747564 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.4",
	  "gitCommit": "bae2c62678db2b5053817bc97181fcc2e8388103",
	  "gitTreeState": "clean",
	  "buildDate": "2023-11-15T16:48:54Z",
	  "goVersion": "go1.20.11",
	  "compiler": "gc",
	  "platform": "linux/arm64"
	}
	I0109 00:25:15.052913 1747564 api_server.go:141] control plane version: v1.28.4
	I0109 00:25:15.052932 1747564 api_server.go:131] duration metric: took 11.772469ms to wait for apiserver health ...
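
The /version response body logged above is a small fixed-shape JSON document; decoding gitVersion is what yields the "control plane version: v1.28.4" line. A sketch of that decode, with the struct fields mirroring the body shown in the log (TLS verification again skipped only for brevity):

    package main

    import (
    	"crypto/tls"
    	"encoding/json"
    	"fmt"
    	"net/http"
    )

    // versionInfo mirrors the fields of the /version body logged above.
    type versionInfo struct {
    	Major      string `json:"major"`
    	Minor      string `json:"minor"`
    	GitVersion string `json:"gitVersion"`
    	Platform   string `json:"platform"`
    }

    func main() {
    	client := &http.Client{Transport: &http.Transport{
    		TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // sketch only
    	}}
    	resp, err := client.Get("https://192.168.58.2:8443/version")
    	if err != nil {
    		panic(err)
    	}
    	defer resp.Body.Close()
    	var v versionInfo
    	if err := json.NewDecoder(resp.Body).Decode(&v); err != nil {
    		panic(err)
    	}
    	fmt.Println("control plane version:", v.GitVersion) // v1.28.4 in the run above
    }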
	I0109 00:25:15.052943 1747564 system_pods.go:43] waiting for kube-system pods to appear ...
	I0109 00:25:15.190302 1747564 request.go:629] Waited for 137.266118ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0109 00:25:15.190399 1747564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0109 00:25:15.190412 1747564 round_trippers.go:469] Request Headers:
	I0109 00:25:15.190422 1747564 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:25:15.190458 1747564 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0109 00:25:15.193895 1747564 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0109 00:25:15.194019 1747564 round_trippers.go:577] Response Headers:
	I0109 00:25:15.194231 1747564 round_trippers.go:580]     Content-Type: application/json
	I0109 00:25:15.194253 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab5afe39-1a2e-4c0a-808e-6147fe73e525
	I0109 00:25:15.194260 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 95fdf47b-223c-4eb6-a41b-30b44ade375a
	I0109 00:25:15.194273 1747564 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:25:15 GMT
	I0109 00:25:15.194284 1747564 round_trippers.go:580]     Audit-Id: 63ccc28b-0c9c-4ae9-9646-1894563b9999
	I0109 00:25:15.194291 1747564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:25:15.194715 1747564 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"456"},"items":[{"metadata":{"name":"coredns-5dd5756b68-shbhd","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"46759197-0373-4f95-ba9c-8065624d0f27","resourceVersion":"443","creationTimestamp":"2024-01-09T00:25:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"eed4b8ed-33af-4fa3-b93c-c7a4f038d6a2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:25:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"eed4b8ed-33af-4fa3-b93c-c7a4f038d6a2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55612 chars]
	I0109 00:25:15.197139 1747564 system_pods.go:59] 8 kube-system pods found
	I0109 00:25:15.197173 1747564 system_pods.go:61] "coredns-5dd5756b68-shbhd" [46759197-0373-4f95-ba9c-8065624d0f27] Running
	I0109 00:25:15.197179 1747564 system_pods.go:61] "etcd-multinode-979047" [a5a13277-0ebc-493c-a6d6-f46ae712ddb9] Running
	I0109 00:25:15.197185 1747564 system_pods.go:61] "kindnet-b4fpt" [11e40151-521e-4937-90fb-feb0d88d49ce] Running
	I0109 00:25:15.197193 1747564 system_pods.go:61] "kube-apiserver-multinode-979047" [38619ccd-6ea3-42d0-8b26-b59a1af5875d] Running
	I0109 00:25:15.197204 1747564 system_pods.go:61] "kube-controller-manager-multinode-979047" [cd5437df-a3ac-4591-8cce-765486ff6afb] Running
	I0109 00:25:15.197213 1747564 system_pods.go:61] "kube-proxy-r5w9b" [0b49bb1e-f3f4-4760-bb78-97d8bc5ae4e6] Running
	I0109 00:25:15.197219 1747564 system_pods.go:61] "kube-scheduler-multinode-979047" [332540fe-3c27-468f-9108-453e0086f012] Running
	I0109 00:25:15.197227 1747564 system_pods.go:61] "storage-provisioner" [b69dd807-2575-40fe-87fc-53a9e39c9b2d] Running
	I0109 00:25:15.197233 1747564 system_pods.go:74] duration metric: took 144.27358ms to wait for pod list to return data ...
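The pod-list wait above issues raw GETs against /api/v1/namespaces/kube-system/pods, subject to client-side throttling. The same check can be approximated with kubectl (the context name is assumed to match the minikube profile, which is minikube's default behavior):

	# List the kube-system pods the wait loop is polling
	kubectl --context multinode-979047 get pods -n kube-system
	# Or print just the phase of each pod
	kubectl --context multinode-979047 get pods -n kube-system \
	  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.phase}{"\n"}{end}'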
	I0109 00:25:15.197242 1747564 default_sa.go:34] waiting for default service account to be created ...
	I0109 00:25:15.390679 1747564 request.go:629] Waited for 193.340228ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/default/serviceaccounts
	I0109 00:25:15.390745 1747564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/default/serviceaccounts
	I0109 00:25:15.390755 1747564 round_trippers.go:469] Request Headers:
	I0109 00:25:15.390764 1747564 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:25:15.390775 1747564 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0109 00:25:15.393457 1747564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0109 00:25:15.393478 1747564 round_trippers.go:577] Response Headers:
	I0109 00:25:15.393486 1747564 round_trippers.go:580]     Content-Length: 261
	I0109 00:25:15.393493 1747564 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:25:15 GMT
	I0109 00:25:15.393499 1747564 round_trippers.go:580]     Audit-Id: dcb44e68-cfea-4fb9-a9ea-697db16fe277
	I0109 00:25:15.393505 1747564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:25:15.393512 1747564 round_trippers.go:580]     Content-Type: application/json
	I0109 00:25:15.393521 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab5afe39-1a2e-4c0a-808e-6147fe73e525
	I0109 00:25:15.393530 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 95fdf47b-223c-4eb6-a41b-30b44ade375a
	I0109 00:25:15.393548 1747564 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"456"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"65cac51f-8cde-4521-b8d1-e9682a79d90c","resourceVersion":"355","creationTimestamp":"2024-01-09T00:25:07Z"}}]}
	I0109 00:25:15.393752 1747564 default_sa.go:45] found service account: "default"
	I0109 00:25:15.393773 1747564 default_sa.go:55] duration metric: took 196.520991ms for default service account to be created ...
	I0109 00:25:15.393791 1747564 system_pods.go:116] waiting for k8s-apps to be running ...
	I0109 00:25:15.590199 1747564 request.go:629] Waited for 196.345547ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0109 00:25:15.590260 1747564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0109 00:25:15.590266 1747564 round_trippers.go:469] Request Headers:
	I0109 00:25:15.590275 1747564 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:25:15.590288 1747564 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0109 00:25:15.593842 1747564 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0109 00:25:15.593877 1747564 round_trippers.go:577] Response Headers:
	I0109 00:25:15.593886 1747564 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:25:15 GMT
	I0109 00:25:15.593893 1747564 round_trippers.go:580]     Audit-Id: 320c3a84-6723-4d15-b97a-ca4bad164e07
	I0109 00:25:15.593900 1747564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:25:15.593906 1747564 round_trippers.go:580]     Content-Type: application/json
	I0109 00:25:15.593913 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab5afe39-1a2e-4c0a-808e-6147fe73e525
	I0109 00:25:15.593919 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 95fdf47b-223c-4eb6-a41b-30b44ade375a
	I0109 00:25:15.594873 1747564 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"456"},"items":[{"metadata":{"name":"coredns-5dd5756b68-shbhd","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"46759197-0373-4f95-ba9c-8065624d0f27","resourceVersion":"443","creationTimestamp":"2024-01-09T00:25:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"eed4b8ed-33af-4fa3-b93c-c7a4f038d6a2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:25:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"eed4b8ed-33af-4fa3-b93c-c7a4f038d6a2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55612 chars]
	I0109 00:25:15.597290 1747564 system_pods.go:86] 8 kube-system pods found
	I0109 00:25:15.597319 1747564 system_pods.go:89] "coredns-5dd5756b68-shbhd" [46759197-0373-4f95-ba9c-8065624d0f27] Running
	I0109 00:25:15.597327 1747564 system_pods.go:89] "etcd-multinode-979047" [a5a13277-0ebc-493c-a6d6-f46ae712ddb9] Running
	I0109 00:25:15.597332 1747564 system_pods.go:89] "kindnet-b4fpt" [11e40151-521e-4937-90fb-feb0d88d49ce] Running
	I0109 00:25:15.597337 1747564 system_pods.go:89] "kube-apiserver-multinode-979047" [38619ccd-6ea3-42d0-8b26-b59a1af5875d] Running
	I0109 00:25:15.597342 1747564 system_pods.go:89] "kube-controller-manager-multinode-979047" [cd5437df-a3ac-4591-8cce-765486ff6afb] Running
	I0109 00:25:15.597354 1747564 system_pods.go:89] "kube-proxy-r5w9b" [0b49bb1e-f3f4-4760-bb78-97d8bc5ae4e6] Running
	I0109 00:25:15.597362 1747564 system_pods.go:89] "kube-scheduler-multinode-979047" [332540fe-3c27-468f-9108-453e0086f012] Running
	I0109 00:25:15.597367 1747564 system_pods.go:89] "storage-provisioner" [b69dd807-2575-40fe-87fc-53a9e39c9b2d] Running
	I0109 00:25:15.597376 1747564 system_pods.go:126] duration metric: took 203.580219ms to wait for k8s-apps to be running ...
	I0109 00:25:15.597387 1747564 system_svc.go:44] waiting for kubelet service to be running ....
	I0109 00:25:15.597448 1747564 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0109 00:25:15.611054 1747564 system_svc.go:56] duration metric: took 13.657794ms WaitForService to wait for kubelet.
	I0109 00:25:15.611086 1747564 kubeadm.go:581] duration metric: took 7.669944697s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0109 00:25:15.611136 1747564 node_conditions.go:102] verifying NodePressure condition ...
	I0109 00:25:15.790532 1747564 request.go:629] Waited for 179.322283ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes
	I0109 00:25:15.790589 1747564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes
	I0109 00:25:15.790596 1747564 round_trippers.go:469] Request Headers:
	I0109 00:25:15.790605 1747564 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:25:15.790616 1747564 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0109 00:25:15.793187 1747564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0109 00:25:15.793211 1747564 round_trippers.go:577] Response Headers:
	I0109 00:25:15.793220 1747564 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:25:15 GMT
	I0109 00:25:15.793233 1747564 round_trippers.go:580]     Audit-Id: 44203d0e-68e1-40b6-a204-dd1621694a1f
	I0109 00:25:15.793240 1747564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:25:15.793246 1747564 round_trippers.go:580]     Content-Type: application/json
	I0109 00:25:15.793253 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab5afe39-1a2e-4c0a-808e-6147fe73e525
	I0109 00:25:15.793263 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 95fdf47b-223c-4eb6-a41b-30b44ade375a
	I0109 00:25:15.793398 1747564 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"456"},"items":[{"metadata":{"name":"multinode-979047","uid":"225848e2-4f2a-49f7-a1f4-c7468a250f39","resourceVersion":"427","creationTimestamp":"2024-01-09T00:24:51Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-979047","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-979047","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_09T00_24_55_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 6082 chars]
	I0109 00:25:15.793855 1747564 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0109 00:25:15.793881 1747564 node_conditions.go:123] node cpu capacity is 2
	I0109 00:25:15.793892 1747564 node_conditions.go:105] duration metric: took 182.747085ms to run NodePressure ...
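The NodePressure check above reads each node's reported capacity from the NodeList (here 203034800Ki of ephemeral storage and 2 CPUs). A kubectl sketch of the same lookup (context name assumed as before):

	# Inspect the node capacity fields the wait loop verifies
	kubectl --context multinode-979047 get nodes \
	  -o jsonpath='{range .items[*]}{.metadata.name}{": cpu="}{.status.capacity.cpu}{" ephemeral="}{.status.capacity.ephemeral-storage}{"\n"}{end}'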
	I0109 00:25:15.793904 1747564 start.go:228] waiting for startup goroutines ...
	I0109 00:25:15.793914 1747564 start.go:233] waiting for cluster config update ...
	I0109 00:25:15.793924 1747564 start.go:242] writing updated cluster config ...
	I0109 00:25:15.796751 1747564 out.go:177] 
	I0109 00:25:15.798924 1747564 config.go:182] Loaded profile config "multinode-979047": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0109 00:25:15.799051 1747564 profile.go:148] Saving config to /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/multinode-979047/config.json ...
	I0109 00:25:15.801417 1747564 out.go:177] * Starting worker node multinode-979047-m02 in cluster multinode-979047
	I0109 00:25:15.803810 1747564 cache.go:121] Beginning downloading kic base image for docker with crio
	I0109 00:25:15.805852 1747564 out.go:177] * Pulling base image v0.0.42-1704751654-17830 ...
	I0109 00:25:15.807861 1747564 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0109 00:25:15.807886 1747564 cache.go:56] Caching tarball of preloaded images
	I0109 00:25:15.807934 1747564 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 in local docker daemon
	I0109 00:25:15.807991 1747564 preload.go:174] Found /home/jenkins/minikube-integration/17830-1678586/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0109 00:25:15.808005 1747564 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0109 00:25:15.808099 1747564 profile.go:148] Saving config to /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/multinode-979047/config.json ...
	I0109 00:25:15.825405 1747564 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 in local docker daemon, skipping pull
	I0109 00:25:15.825431 1747564 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 exists in daemon, skipping load
	I0109 00:25:15.825454 1747564 cache.go:194] Successfully downloaded all kic artifacts
	I0109 00:25:15.825490 1747564 start.go:365] acquiring machines lock for multinode-979047-m02: {Name:mkf58e2bd9ec0598265c1d83b1c2e0a6354fef3d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0109 00:25:15.825617 1747564 start.go:369] acquired machines lock for "multinode-979047-m02" in 109.851µs
	I0109 00:25:15.825644 1747564 start.go:93] Provisioning new machine with config: &{Name:multinode-979047 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-979047 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name:m02 IP: Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0109 00:25:15.825730 1747564 start.go:125] createHost starting for "m02" (driver="docker")
	I0109 00:25:15.828501 1747564 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0109 00:25:15.828625 1747564 start.go:159] libmachine.API.Create for "multinode-979047" (driver="docker")
	I0109 00:25:15.828650 1747564 client.go:168] LocalClient.Create starting
	I0109 00:25:15.828717 1747564 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17830-1678586/.minikube/certs/ca.pem
	I0109 00:25:15.828754 1747564 main.go:141] libmachine: Decoding PEM data...
	I0109 00:25:15.828772 1747564 main.go:141] libmachine: Parsing certificate...
	I0109 00:25:15.828834 1747564 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17830-1678586/.minikube/certs/cert.pem
	I0109 00:25:15.828856 1747564 main.go:141] libmachine: Decoding PEM data...
	I0109 00:25:15.828878 1747564 main.go:141] libmachine: Parsing certificate...
	I0109 00:25:15.829119 1747564 cli_runner.go:164] Run: docker network inspect multinode-979047 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0109 00:25:15.846471 1747564 network_create.go:77] Found existing network {name:multinode-979047 subnet:0x4002c6c360 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 58 1] mtu:1500}
	I0109 00:25:15.846526 1747564 kic.go:121] calculated static IP "192.168.58.3" for the "multinode-979047-m02" container
	I0109 00:25:15.846595 1747564 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0109 00:25:15.863864 1747564 cli_runner.go:164] Run: docker volume create multinode-979047-m02 --label name.minikube.sigs.k8s.io=multinode-979047-m02 --label created_by.minikube.sigs.k8s.io=true
	I0109 00:25:15.882387 1747564 oci.go:103] Successfully created a docker volume multinode-979047-m02
	I0109 00:25:15.882536 1747564 cli_runner.go:164] Run: docker run --rm --name multinode-979047-m02-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-979047-m02 --entrypoint /usr/bin/test -v multinode-979047-m02:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 -d /var/lib
	I0109 00:25:16.435974 1747564 oci.go:107] Successfully prepared a docker volume multinode-979047-m02
	I0109 00:25:16.436022 1747564 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0109 00:25:16.436043 1747564 kic.go:194] Starting extracting preloaded images to volume ...
	I0109 00:25:16.436129 1747564 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17830-1678586/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v multinode-979047-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 -I lz4 -xf /preloaded.tar -C /extractDir
	I0109 00:25:20.665716 1747564 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17830-1678586/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v multinode-979047-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 -I lz4 -xf /preloaded.tar -C /extractDir: (4.22952381s)
	I0109 00:25:20.665752 1747564 kic.go:203] duration metric: took 4.229707 seconds to extract preloaded images to volume
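The two docker runs above implement a common pattern: create a named volume, populate it once with a throwaway container (here by extracting the preloaded image tarball into /extractDir), then mount it into the long-lived node container. A generic sketch of the pattern (volume, container, and image names below are hypothetical, not from this log):

	# 1. Create the named volume
	docker volume create demo-data
	# 2. Populate it with a one-shot container; --rm discards the container, the volume persists
	docker run --rm -v demo-data:/data alpine sh -c 'echo seeded > /data/marker'
	# 3. Mount the pre-populated volume into the real container
	docker run -d --name demo-node -v demo-data:/data alpine sleep infinity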
	W0109 00:25:20.665905 1747564 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0109 00:25:20.666020 1747564 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0109 00:25:20.739673 1747564 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-979047-m02 --name multinode-979047-m02 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-979047-m02 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-979047-m02 --network multinode-979047 --ip 192.168.58.3 --volume multinode-979047-m02:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617
	I0109 00:25:21.124276 1747564 cli_runner.go:164] Run: docker container inspect multinode-979047-m02 --format={{.State.Running}}
	I0109 00:25:21.147694 1747564 cli_runner.go:164] Run: docker container inspect multinode-979047-m02 --format={{.State.Status}}
	I0109 00:25:21.178214 1747564 cli_runner.go:164] Run: docker exec multinode-979047-m02 stat /var/lib/dpkg/alternatives/iptables
	I0109 00:25:21.259528 1747564 oci.go:144] the created container "multinode-979047-m02" has a running status.
	I0109 00:25:21.259555 1747564 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17830-1678586/.minikube/machines/multinode-979047-m02/id_rsa...
	I0109 00:25:21.538935 1747564 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-1678586/.minikube/machines/multinode-979047-m02/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0109 00:25:21.539033 1747564 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17830-1678586/.minikube/machines/multinode-979047-m02/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0109 00:25:21.567987 1747564 cli_runner.go:164] Run: docker container inspect multinode-979047-m02 --format={{.State.Status}}
	I0109 00:25:21.602805 1747564 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0109 00:25:21.602826 1747564 kic_runner.go:114] Args: [docker exec --privileged multinode-979047-m02 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0109 00:25:21.694232 1747564 cli_runner.go:164] Run: docker container inspect multinode-979047-m02 --format={{.State.Status}}
	I0109 00:25:21.718365 1747564 machine.go:88] provisioning docker machine ...
	I0109 00:25:21.721499 1747564 ubuntu.go:169] provisioning hostname "multinode-979047-m02"
	I0109 00:25:21.721575 1747564 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-979047-m02
	I0109 00:25:21.756576 1747564 main.go:141] libmachine: Using SSH client type: native
	I0109 00:25:21.756981 1747564 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bfa60] 0x3c21d0 <nil>  [] 0s} 127.0.0.1 34449 <nil> <nil>}
	I0109 00:25:21.756993 1747564 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-979047-m02 && echo "multinode-979047-m02" | sudo tee /etc/hostname
	I0109 00:25:21.757674 1747564 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:48396->127.0.0.1:34449: read: connection reset by peer
	I0109 00:25:24.932649 1747564 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-979047-m02
	
	I0109 00:25:24.932736 1747564 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-979047-m02
	I0109 00:25:24.951483 1747564 main.go:141] libmachine: Using SSH client type: native
	I0109 00:25:24.951881 1747564 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bfa60] 0x3c21d0 <nil>  [] 0s} 127.0.0.1 34449 <nil> <nil>}
	I0109 00:25:24.951906 1747564 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-979047-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-979047-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-979047-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0109 00:25:25.104488 1747564 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0109 00:25:25.104520 1747564 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17830-1678586/.minikube CaCertPath:/home/jenkins/minikube-integration/17830-1678586/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17830-1678586/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17830-1678586/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17830-1678586/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17830-1678586/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17830-1678586/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17830-1678586/.minikube}
	I0109 00:25:25.104536 1747564 ubuntu.go:177] setting up certificates
	I0109 00:25:25.104580 1747564 provision.go:83] configureAuth start
	I0109 00:25:25.104665 1747564 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-979047-m02
	I0109 00:25:25.123700 1747564 provision.go:138] copyHostCerts
	I0109 00:25:25.123748 1747564 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-1678586/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17830-1678586/.minikube/ca.pem
	I0109 00:25:25.123784 1747564 exec_runner.go:144] found /home/jenkins/minikube-integration/17830-1678586/.minikube/ca.pem, removing ...
	I0109 00:25:25.123791 1747564 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17830-1678586/.minikube/ca.pem
	I0109 00:25:25.123868 1747564 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17830-1678586/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17830-1678586/.minikube/ca.pem (1082 bytes)
	I0109 00:25:25.123952 1747564 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-1678586/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17830-1678586/.minikube/cert.pem
	I0109 00:25:25.123969 1747564 exec_runner.go:144] found /home/jenkins/minikube-integration/17830-1678586/.minikube/cert.pem, removing ...
	I0109 00:25:25.123974 1747564 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17830-1678586/.minikube/cert.pem
	I0109 00:25:25.124000 1747564 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17830-1678586/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17830-1678586/.minikube/cert.pem (1123 bytes)
	I0109 00:25:25.124047 1747564 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-1678586/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17830-1678586/.minikube/key.pem
	I0109 00:25:25.124062 1747564 exec_runner.go:144] found /home/jenkins/minikube-integration/17830-1678586/.minikube/key.pem, removing ...
	I0109 00:25:25.124066 1747564 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17830-1678586/.minikube/key.pem
	I0109 00:25:25.124090 1747564 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17830-1678586/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17830-1678586/.minikube/key.pem (1679 bytes)
	I0109 00:25:25.124144 1747564 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17830-1678586/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17830-1678586/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17830-1678586/.minikube/certs/ca-key.pem org=jenkins.multinode-979047-m02 san=[192.168.58.3 127.0.0.1 localhost 127.0.0.1 minikube multinode-979047-m02]
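configureAuth generates a server certificate signed by the shared minikube CA, with a SAN list covering the node IP, loopback, and the machine's hostnames, as logged above. A rough openssl sketch of an equivalent SAN-bearing certificate (file names and key size are illustrative; this is not minikube's actual implementation, which does this in Go):

	# Create a CSR for the machine, then sign it with the CA, embedding the SANs from the log
	openssl req -newkey rsa:2048 -nodes -keyout server-key.pem \
	  -subj "/O=jenkins.multinode-979047-m02" -out server.csr
	openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -days 365 \
	  -extfile <(echo "subjectAltName=IP:192.168.58.3,IP:127.0.0.1,DNS:localhost,DNS:minikube,DNS:multinode-979047-m02") \
	  -out server.pem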
	I0109 00:25:25.977701 1747564 provision.go:172] copyRemoteCerts
	I0109 00:25:25.977771 1747564 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0109 00:25:25.977812 1747564 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-979047-m02
	I0109 00:25:26.001584 1747564 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34449 SSHKeyPath:/home/jenkins/minikube-integration/17830-1678586/.minikube/machines/multinode-979047-m02/id_rsa Username:docker}
	I0109 00:25:26.105081 1747564 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-1678586/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0109 00:25:26.105146 1747564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-1678586/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0109 00:25:26.134415 1747564 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-1678586/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0109 00:25:26.134500 1747564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-1678586/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I0109 00:25:26.163709 1747564 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-1678586/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0109 00:25:26.163774 1747564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-1678586/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0109 00:25:26.192596 1747564 provision.go:86] duration metric: configureAuth took 1.087995195s
	I0109 00:25:26.192623 1747564 ubuntu.go:193] setting minikube options for container-runtime
	I0109 00:25:26.192822 1747564 config.go:182] Loaded profile config "multinode-979047": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0109 00:25:26.192921 1747564 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-979047-m02
	I0109 00:25:26.212409 1747564 main.go:141] libmachine: Using SSH client type: native
	I0109 00:25:26.212824 1747564 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bfa60] 0x3c21d0 <nil>  [] 0s} 127.0.0.1 34449 <nil> <nil>}
	I0109 00:25:26.212839 1747564 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0109 00:25:26.480730 1747564 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0109 00:25:26.480756 1747564 machine.go:91] provisioned docker machine in 4.75928105s
	I0109 00:25:26.480774 1747564 client.go:171] LocalClient.Create took 10.652107653s
	I0109 00:25:26.480793 1747564 start.go:167] duration metric: libmachine.API.Create for "multinode-979047" took 10.652168545s
	I0109 00:25:26.480804 1747564 start.go:300] post-start starting for "multinode-979047-m02" (driver="docker")
	I0109 00:25:26.480815 1747564 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0109 00:25:26.480896 1747564 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0109 00:25:26.480942 1747564 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-979047-m02
	I0109 00:25:26.500430 1747564 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34449 SSHKeyPath:/home/jenkins/minikube-integration/17830-1678586/.minikube/machines/multinode-979047-m02/id_rsa Username:docker}
	I0109 00:25:26.609359 1747564 ssh_runner.go:195] Run: cat /etc/os-release
	I0109 00:25:26.613328 1747564 command_runner.go:130] > PRETTY_NAME="Ubuntu 22.04.3 LTS"
	I0109 00:25:26.613347 1747564 command_runner.go:130] > NAME="Ubuntu"
	I0109 00:25:26.613354 1747564 command_runner.go:130] > VERSION_ID="22.04"
	I0109 00:25:26.613361 1747564 command_runner.go:130] > VERSION="22.04.3 LTS (Jammy Jellyfish)"
	I0109 00:25:26.613366 1747564 command_runner.go:130] > VERSION_CODENAME=jammy
	I0109 00:25:26.613371 1747564 command_runner.go:130] > ID=ubuntu
	I0109 00:25:26.613376 1747564 command_runner.go:130] > ID_LIKE=debian
	I0109 00:25:26.613382 1747564 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I0109 00:25:26.613388 1747564 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I0109 00:25:26.613395 1747564 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0109 00:25:26.613403 1747564 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0109 00:25:26.613409 1747564 command_runner.go:130] > UBUNTU_CODENAME=jammy
	I0109 00:25:26.613453 1747564 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0109 00:25:26.613475 1747564 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0109 00:25:26.613485 1747564 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0109 00:25:26.613492 1747564 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0109 00:25:26.613502 1747564 filesync.go:126] Scanning /home/jenkins/minikube-integration/17830-1678586/.minikube/addons for local assets ...
	I0109 00:25:26.613556 1747564 filesync.go:126] Scanning /home/jenkins/minikube-integration/17830-1678586/.minikube/files for local assets ...
	I0109 00:25:26.613628 1747564 filesync.go:149] local asset: /home/jenkins/minikube-integration/17830-1678586/.minikube/files/etc/ssl/certs/16839672.pem -> 16839672.pem in /etc/ssl/certs
	I0109 00:25:26.613635 1747564 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-1678586/.minikube/files/etc/ssl/certs/16839672.pem -> /etc/ssl/certs/16839672.pem
	I0109 00:25:26.613733 1747564 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0109 00:25:26.624290 1747564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-1678586/.minikube/files/etc/ssl/certs/16839672.pem --> /etc/ssl/certs/16839672.pem (1708 bytes)
	I0109 00:25:26.653576 1747564 start.go:303] post-start completed in 172.756654ms
	I0109 00:25:26.653937 1747564 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-979047-m02
	I0109 00:25:26.672257 1747564 profile.go:148] Saving config to /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/multinode-979047/config.json ...
	I0109 00:25:26.672525 1747564 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0109 00:25:26.672573 1747564 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-979047-m02
	I0109 00:25:26.690360 1747564 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34449 SSHKeyPath:/home/jenkins/minikube-integration/17830-1678586/.minikube/machines/multinode-979047-m02/id_rsa Username:docker}
	I0109 00:25:26.792040 1747564 command_runner.go:130] > 15%!
	(MISSING)I0109 00:25:26.792527 1747564 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0109 00:25:26.797822 1747564 command_runner.go:130] > 167G
	I0109 00:25:26.798225 1747564 start.go:128] duration metric: createHost completed in 10.972480769s
	I0109 00:25:26.798244 1747564 start.go:83] releasing machines lock for "multinode-979047-m02", held for 10.972617402s
	I0109 00:25:26.798323 1747564 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-979047-m02
	I0109 00:25:26.817856 1747564 out.go:177] * Found network options:
	I0109 00:25:26.820085 1747564 out.go:177]   - NO_PROXY=192.168.58.2
	W0109 00:25:26.822476 1747564 proxy.go:119] fail to check proxy env: Error ip not in block
	W0109 00:25:26.822516 1747564 proxy.go:119] fail to check proxy env: Error ip not in block
	I0109 00:25:26.822589 1747564 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0109 00:25:26.822636 1747564 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-979047-m02
	I0109 00:25:26.822898 1747564 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0109 00:25:26.822959 1747564 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-979047-m02
	I0109 00:25:26.843021 1747564 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34449 SSHKeyPath:/home/jenkins/minikube-integration/17830-1678586/.minikube/machines/multinode-979047-m02/id_rsa Username:docker}
	I0109 00:25:26.854093 1747564 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34449 SSHKeyPath:/home/jenkins/minikube-integration/17830-1678586/.minikube/machines/multinode-979047-m02/id_rsa Username:docker}
	I0109 00:25:27.115458 1747564 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0109 00:25:27.115562 1747564 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0109 00:25:27.121227 1747564 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I0109 00:25:27.121251 1747564 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I0109 00:25:27.121259 1747564 command_runner.go:130] > Device: b3h/179d	Inode: 2083141     Links: 1
	I0109 00:25:27.121267 1747564 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0109 00:25:27.121280 1747564 command_runner.go:130] > Access: 2023-06-14 14:44:50.000000000 +0000
	I0109 00:25:27.121286 1747564 command_runner.go:130] > Modify: 2023-06-14 14:44:50.000000000 +0000
	I0109 00:25:27.121292 1747564 command_runner.go:130] > Change: 2024-01-09 00:01:33.738751998 +0000
	I0109 00:25:27.121298 1747564 command_runner.go:130] >  Birth: 2024-01-09 00:01:33.738751998 +0000
	I0109 00:25:27.121615 1747564 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0109 00:25:27.146920 1747564 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0109 00:25:27.147039 1747564 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0109 00:25:27.188154 1747564 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf, 
	I0109 00:25:27.188187 1747564 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0109 00:25:27.188196 1747564 start.go:475] detecting cgroup driver to use...
	I0109 00:25:27.188227 1747564 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0109 00:25:27.188282 1747564 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0109 00:25:27.208377 1747564 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0109 00:25:27.221862 1747564 docker.go:203] disabling cri-docker service (if available) ...
	I0109 00:25:27.221963 1747564 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0109 00:25:27.238619 1747564 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0109 00:25:27.255073 1747564 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0109 00:25:27.357506 1747564 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0109 00:25:27.463856 1747564 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I0109 00:25:27.463940 1747564 docker.go:219] disabling docker service ...
	I0109 00:25:27.464024 1747564 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0109 00:25:27.486647 1747564 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0109 00:25:27.500965 1747564 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0109 00:25:27.601999 1747564 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I0109 00:25:27.602133 1747564 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0109 00:25:27.718032 1747564 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I0109 00:25:27.718156 1747564 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0109 00:25:27.731652 1747564 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0109 00:25:27.749545 1747564 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0109 00:25:27.751058 1747564 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0109 00:25:27.751124 1747564 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0109 00:25:27.762800 1747564 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0109 00:25:27.762884 1747564 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0109 00:25:27.774568 1747564 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0109 00:25:27.785982 1747564 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0109 00:25:27.800241 1747564 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0109 00:25:27.813510 1747564 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0109 00:25:27.822993 1747564 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0109 00:25:27.824600 1747564 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0109 00:25:27.834933 1747564 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0109 00:25:27.933670 1747564 ssh_runner.go:195] Run: sudo systemctl restart crio
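The sed edits above pin the pause image to registry.k8s.io/pause:3.9, switch the cgroup manager to cgroupfs, and force conmon into the pod cgroup before CRI-O is restarted. A quick way to confirm the drop-in took effect, using the path from the log:

	# Show the three settings the provisioner just rewrote
	grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf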
	I0109 00:25:28.067276 1747564 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0109 00:25:28.067356 1747564 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0109 00:25:28.072137 1747564 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0109 00:25:28.072166 1747564 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0109 00:25:28.072176 1747564 command_runner.go:130] > Device: bch/188d	Inode: 186         Links: 1
	I0109 00:25:28.072190 1747564 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0109 00:25:28.072196 1747564 command_runner.go:130] > Access: 2024-01-09 00:25:28.051433635 +0000
	I0109 00:25:28.072204 1747564 command_runner.go:130] > Modify: 2024-01-09 00:25:28.051433635 +0000
	I0109 00:25:28.072214 1747564 command_runner.go:130] > Change: 2024-01-09 00:25:28.051433635 +0000
	I0109 00:25:28.072220 1747564 command_runner.go:130] >  Birth: -
	I0109 00:25:28.072492 1747564 start.go:543] Will wait 60s for crictl version
	I0109 00:25:28.072553 1747564 ssh_runner.go:195] Run: which crictl
	I0109 00:25:28.077118 1747564 command_runner.go:130] > /usr/bin/crictl
	I0109 00:25:28.077415 1747564 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0109 00:25:28.119138 1747564 command_runner.go:130] > Version:  0.1.0
	I0109 00:25:28.119341 1747564 command_runner.go:130] > RuntimeName:  cri-o
	I0109 00:25:28.119482 1747564 command_runner.go:130] > RuntimeVersion:  1.24.6
	I0109 00:25:28.119651 1747564 command_runner.go:130] > RuntimeApiVersion:  v1
	I0109 00:25:28.122523 1747564 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0109 00:25:28.122598 1747564 ssh_runner.go:195] Run: crio --version
	I0109 00:25:28.163402 1747564 command_runner.go:130] > crio version 1.24.6
	I0109 00:25:28.163426 1747564 command_runner.go:130] > Version:          1.24.6
	I0109 00:25:28.163435 1747564 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I0109 00:25:28.163441 1747564 command_runner.go:130] > GitTreeState:     clean
	I0109 00:25:28.163448 1747564 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I0109 00:25:28.163454 1747564 command_runner.go:130] > GoVersion:        go1.18.2
	I0109 00:25:28.163459 1747564 command_runner.go:130] > Compiler:         gc
	I0109 00:25:28.163465 1747564 command_runner.go:130] > Platform:         linux/arm64
	I0109 00:25:28.163474 1747564 command_runner.go:130] > Linkmode:         dynamic
	I0109 00:25:28.163489 1747564 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0109 00:25:28.163498 1747564 command_runner.go:130] > SeccompEnabled:   true
	I0109 00:25:28.163503 1747564 command_runner.go:130] > AppArmorEnabled:  false
	I0109 00:25:28.165409 1747564 ssh_runner.go:195] Run: crio --version
	I0109 00:25:28.209063 1747564 command_runner.go:130] > crio version 1.24.6
	I0109 00:25:28.209087 1747564 command_runner.go:130] > Version:          1.24.6
	I0109 00:25:28.209096 1747564 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I0109 00:25:28.209102 1747564 command_runner.go:130] > GitTreeState:     clean
	I0109 00:25:28.209108 1747564 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I0109 00:25:28.209113 1747564 command_runner.go:130] > GoVersion:        go1.18.2
	I0109 00:25:28.209118 1747564 command_runner.go:130] > Compiler:         gc
	I0109 00:25:28.209124 1747564 command_runner.go:130] > Platform:         linux/arm64
	I0109 00:25:28.209130 1747564 command_runner.go:130] > Linkmode:         dynamic
	I0109 00:25:28.209143 1747564 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0109 00:25:28.209151 1747564 command_runner.go:130] > SeccompEnabled:   true
	I0109 00:25:28.209161 1747564 command_runner.go:130] > AppArmorEnabled:  false
	I0109 00:25:28.214747 1747564 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.6 ...
	I0109 00:25:28.217084 1747564 out.go:177]   - env NO_PROXY=192.168.58.2
	I0109 00:25:28.219109 1747564 cli_runner.go:164] Run: docker network inspect multinode-979047 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0109 00:25:28.239629 1747564 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I0109 00:25:28.244217 1747564 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0109 00:25:28.257346 1747564 certs.go:56] Setting up /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/multinode-979047 for IP: 192.168.58.3
	I0109 00:25:28.257377 1747564 certs.go:190] acquiring lock for shared ca certs: {Name:mkd1a8a8c523b20f31a5839efb0f14edb2634692 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0109 00:25:28.257518 1747564 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17830-1678586/.minikube/ca.key
	I0109 00:25:28.257557 1747564 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17830-1678586/.minikube/proxy-client-ca.key
	I0109 00:25:28.257567 1747564 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-1678586/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0109 00:25:28.257581 1747564 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-1678586/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0109 00:25:28.257592 1747564 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-1678586/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0109 00:25:28.257606 1747564 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-1678586/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0109 00:25:28.257658 1747564 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-1678586/.minikube/certs/home/jenkins/minikube-integration/17830-1678586/.minikube/certs/1683967.pem (1338 bytes)
	W0109 00:25:28.257687 1747564 certs.go:433] ignoring /home/jenkins/minikube-integration/17830-1678586/.minikube/certs/home/jenkins/minikube-integration/17830-1678586/.minikube/certs/1683967_empty.pem, impossibly tiny 0 bytes
	I0109 00:25:28.257696 1747564 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-1678586/.minikube/certs/home/jenkins/minikube-integration/17830-1678586/.minikube/certs/ca-key.pem (1679 bytes)
	I0109 00:25:28.257722 1747564 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-1678586/.minikube/certs/home/jenkins/minikube-integration/17830-1678586/.minikube/certs/ca.pem (1082 bytes)
	I0109 00:25:28.257745 1747564 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-1678586/.minikube/certs/home/jenkins/minikube-integration/17830-1678586/.minikube/certs/cert.pem (1123 bytes)
	I0109 00:25:28.257767 1747564 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-1678586/.minikube/certs/home/jenkins/minikube-integration/17830-1678586/.minikube/certs/key.pem (1679 bytes)
	I0109 00:25:28.257811 1747564 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-1678586/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17830-1678586/.minikube/files/etc/ssl/certs/16839672.pem (1708 bytes)
	I0109 00:25:28.257836 1747564 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-1678586/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0109 00:25:28.257847 1747564 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-1678586/.minikube/certs/1683967.pem -> /usr/share/ca-certificates/1683967.pem
	I0109 00:25:28.257858 1747564 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17830-1678586/.minikube/files/etc/ssl/certs/16839672.pem -> /usr/share/ca-certificates/16839672.pem
	I0109 00:25:28.258196 1747564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-1678586/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0109 00:25:28.286619 1747564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-1678586/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0109 00:25:28.313950 1747564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-1678586/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0109 00:25:28.342337 1747564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-1678586/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0109 00:25:28.371035 1747564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-1678586/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0109 00:25:28.399629 1747564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-1678586/.minikube/certs/1683967.pem --> /usr/share/ca-certificates/1683967.pem (1338 bytes)
	I0109 00:25:28.427767 1747564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-1678586/.minikube/files/etc/ssl/certs/16839672.pem --> /usr/share/ca-certificates/16839672.pem (1708 bytes)
	I0109 00:25:28.456607 1747564 ssh_runner.go:195] Run: openssl version
	I0109 00:25:28.463265 1747564 command_runner.go:130] > OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)
	I0109 00:25:28.463655 1747564 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0109 00:25:28.475544 1747564 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0109 00:25:28.480199 1747564 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jan  9 00:02 /usr/share/ca-certificates/minikubeCA.pem
	I0109 00:25:28.480223 1747564 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan  9 00:02 /usr/share/ca-certificates/minikubeCA.pem
	I0109 00:25:28.480273 1747564 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0109 00:25:28.488552 1747564 command_runner.go:130] > b5213941
	I0109 00:25:28.489007 1747564 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0109 00:25:28.500124 1747564 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1683967.pem && ln -fs /usr/share/ca-certificates/1683967.pem /etc/ssl/certs/1683967.pem"
	I0109 00:25:28.511517 1747564 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1683967.pem
	I0109 00:25:28.515946 1747564 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jan  9 00:09 /usr/share/ca-certificates/1683967.pem
	I0109 00:25:28.515972 1747564 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan  9 00:09 /usr/share/ca-certificates/1683967.pem
	I0109 00:25:28.516021 1747564 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1683967.pem
	I0109 00:25:28.523936 1747564 command_runner.go:130] > 51391683
	I0109 00:25:28.524354 1747564 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1683967.pem /etc/ssl/certs/51391683.0"
	I0109 00:25:28.535422 1747564 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16839672.pem && ln -fs /usr/share/ca-certificates/16839672.pem /etc/ssl/certs/16839672.pem"
	I0109 00:25:28.547062 1747564 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16839672.pem
	I0109 00:25:28.551293 1747564 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jan  9 00:09 /usr/share/ca-certificates/16839672.pem
	I0109 00:25:28.551465 1747564 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan  9 00:09 /usr/share/ca-certificates/16839672.pem
	I0109 00:25:28.551549 1747564 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16839672.pem
	I0109 00:25:28.559798 1747564 command_runner.go:130] > 3ec20f2e
	I0109 00:25:28.560317 1747564 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/16839672.pem /etc/ssl/certs/3ec20f2e.0"
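Each test/ln block above installs a CA into /usr/share/ca-certificates and then creates the subject-hash symlink that OpenSSL uses to locate trusted CAs. The pattern, generalized from the commands in the log:

	# OpenSSL looks up CAs via <subject-hash>.0 symlinks in /etc/ssl/certs
	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"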
	I0109 00:25:28.571820 1747564 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0109 00:25:28.576209 1747564 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0109 00:25:28.576243 1747564 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0109 00:25:28.576340 1747564 ssh_runner.go:195] Run: crio config
	I0109 00:25:28.626487 1747564 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0109 00:25:28.626566 1747564 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0109 00:25:28.626589 1747564 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0109 00:25:28.626609 1747564 command_runner.go:130] > #
	I0109 00:25:28.626648 1747564 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0109 00:25:28.626674 1747564 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0109 00:25:28.626698 1747564 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0109 00:25:28.626737 1747564 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0109 00:25:28.626762 1747564 command_runner.go:130] > # reload'.
	I0109 00:25:28.626785 1747564 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0109 00:25:28.626828 1747564 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0109 00:25:28.626863 1747564 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0109 00:25:28.626900 1747564 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0109 00:25:28.626923 1747564 command_runner.go:130] > [crio]
	I0109 00:25:28.626943 1747564 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0109 00:25:28.626978 1747564 command_runner.go:130] > # container images, in this directory.
	I0109 00:25:28.627012 1747564 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I0109 00:25:28.627034 1747564 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0109 00:25:28.627065 1747564 command_runner.go:130] > # runroot = "/tmp/containers-user-1000/containers"
	I0109 00:25:28.627092 1747564 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0109 00:25:28.627114 1747564 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0109 00:25:28.627389 1747564 command_runner.go:130] > # storage_driver = "vfs"
	I0109 00:25:28.627426 1747564 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0109 00:25:28.627447 1747564 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0109 00:25:28.627480 1747564 command_runner.go:130] > # storage_option = [
	I0109 00:25:28.627899 1747564 command_runner.go:130] > # ]
	I0109 00:25:28.627939 1747564 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0109 00:25:28.627974 1747564 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0109 00:25:28.627996 1747564 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0109 00:25:28.628016 1747564 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0109 00:25:28.628051 1747564 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0109 00:25:28.628074 1747564 command_runner.go:130] > # always happen on a node reboot
	I0109 00:25:28.628099 1747564 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0109 00:25:28.628134 1747564 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0109 00:25:28.628160 1747564 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0109 00:25:28.628190 1747564 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0109 00:25:28.628224 1747564 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I0109 00:25:28.628253 1747564 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0109 00:25:28.628279 1747564 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0109 00:25:28.628310 1747564 command_runner.go:130] > # internal_wipe = true
	I0109 00:25:28.628337 1747564 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0109 00:25:28.628359 1747564 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0109 00:25:28.628393 1747564 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0109 00:25:28.628418 1747564 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0109 00:25:28.628442 1747564 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0109 00:25:28.628477 1747564 command_runner.go:130] > [crio.api]
	I0109 00:25:28.628502 1747564 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0109 00:25:28.628523 1747564 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0109 00:25:28.628558 1747564 command_runner.go:130] > # IP address on which the stream server will listen.
	I0109 00:25:28.628580 1747564 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0109 00:25:28.628605 1747564 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0109 00:25:28.628639 1747564 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0109 00:25:28.628663 1747564 command_runner.go:130] > # stream_port = "0"
	I0109 00:25:28.628684 1747564 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0109 00:25:28.628717 1747564 command_runner.go:130] > # stream_enable_tls = false
	I0109 00:25:28.628742 1747564 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0109 00:25:28.628762 1747564 command_runner.go:130] > # stream_idle_timeout = ""
	I0109 00:25:28.628798 1747564 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0109 00:25:28.628824 1747564 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0109 00:25:28.628841 1747564 command_runner.go:130] > # minutes.
	I0109 00:25:28.628881 1747564 command_runner.go:130] > # stream_tls_cert = ""
	I0109 00:25:28.628904 1747564 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0109 00:25:28.628926 1747564 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0109 00:25:28.629950 1747564 command_runner.go:130] > # stream_tls_key = ""
	I0109 00:25:28.629979 1747564 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0109 00:25:28.630001 1747564 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0109 00:25:28.630036 1747564 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0109 00:25:28.630060 1747564 command_runner.go:130] > # stream_tls_ca = ""
	I0109 00:25:28.630092 1747564 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I0109 00:25:28.630123 1747564 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I0109 00:25:28.630151 1747564 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I0109 00:25:28.630172 1747564 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I0109 00:25:28.630221 1747564 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0109 00:25:28.630246 1747564 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0109 00:25:28.630266 1747564 command_runner.go:130] > [crio.runtime]
	I0109 00:25:28.630303 1747564 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0109 00:25:28.630323 1747564 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0109 00:25:28.630342 1747564 command_runner.go:130] > # "nofile=1024:2048"
	I0109 00:25:28.630376 1747564 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0109 00:25:28.630399 1747564 command_runner.go:130] > # default_ulimits = [
	I0109 00:25:28.630417 1747564 command_runner.go:130] > # ]
	I0109 00:25:28.630458 1747564 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0109 00:25:28.630485 1747564 command_runner.go:130] > # no_pivot = false
	I0109 00:25:28.630506 1747564 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0109 00:25:28.630538 1747564 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0109 00:25:28.630682 1747564 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0109 00:25:28.630736 1747564 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0109 00:25:28.630758 1747564 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0109 00:25:28.630792 1747564 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0109 00:25:28.630811 1747564 command_runner.go:130] > # conmon = ""
	I0109 00:25:28.630838 1747564 command_runner.go:130] > # Cgroup setting for conmon
	I0109 00:25:28.630880 1747564 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0109 00:25:28.630899 1747564 command_runner.go:130] > conmon_cgroup = "pod"
	I0109 00:25:28.630930 1747564 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0109 00:25:28.630959 1747564 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0109 00:25:28.630982 1747564 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0109 00:25:28.631001 1747564 command_runner.go:130] > # conmon_env = [
	I0109 00:25:28.631020 1747564 command_runner.go:130] > # ]
	I0109 00:25:28.631048 1747564 command_runner.go:130] > # Additional environment variables to set for all the
	I0109 00:25:28.631069 1747564 command_runner.go:130] > # containers. These are overridden if set in the
	I0109 00:25:28.631109 1747564 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0109 00:25:28.631128 1747564 command_runner.go:130] > # default_env = [
	I0109 00:25:28.631147 1747564 command_runner.go:130] > # ]
	I0109 00:25:28.631169 1747564 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0109 00:25:28.631210 1747564 command_runner.go:130] > # selinux = false
	I0109 00:25:28.631232 1747564 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0109 00:25:28.631254 1747564 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0109 00:25:28.631286 1747564 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0109 00:25:28.631306 1747564 command_runner.go:130] > # seccomp_profile = ""
	I0109 00:25:28.631329 1747564 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0109 00:25:28.631366 1747564 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0109 00:25:28.631387 1747564 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0109 00:25:28.631408 1747564 command_runner.go:130] > # which might increase security.
	I0109 00:25:28.631439 1747564 command_runner.go:130] > # seccomp_use_default_when_empty = true
	I0109 00:25:28.631458 1747564 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0109 00:25:28.631479 1747564 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0109 00:25:28.631501 1747564 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0109 00:25:28.631538 1747564 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0109 00:25:28.631559 1747564 command_runner.go:130] > # This option supports live configuration reload.
	I0109 00:25:28.631579 1747564 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0109 00:25:28.631609 1747564 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0109 00:25:28.631627 1747564 command_runner.go:130] > # the cgroup blockio controller.
	I0109 00:25:28.631657 1747564 command_runner.go:130] > # blockio_config_file = ""
	I0109 00:25:28.631688 1747564 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0109 00:25:28.631706 1747564 command_runner.go:130] > # irqbalance daemon.
	I0109 00:25:28.631727 1747564 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0109 00:25:28.631749 1747564 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0109 00:25:28.631777 1747564 command_runner.go:130] > # This option supports live configuration reload.
	I0109 00:25:28.631795 1747564 command_runner.go:130] > # rdt_config_file = ""
	I0109 00:25:28.631816 1747564 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0109 00:25:28.631836 1747564 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0109 00:25:28.631869 1747564 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0109 00:25:28.631888 1747564 command_runner.go:130] > # separate_pull_cgroup = ""
	I0109 00:25:28.631911 1747564 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0109 00:25:28.631946 1747564 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0109 00:25:28.631965 1747564 command_runner.go:130] > # will be added.
	I0109 00:25:28.631983 1747564 command_runner.go:130] > # default_capabilities = [
	I0109 00:25:28.632002 1747564 command_runner.go:130] > # 	"CHOWN",
	I0109 00:25:28.632036 1747564 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0109 00:25:28.632054 1747564 command_runner.go:130] > # 	"FSETID",
	I0109 00:25:28.632079 1747564 command_runner.go:130] > # 	"FOWNER",
	I0109 00:25:28.632114 1747564 command_runner.go:130] > # 	"SETGID",
	I0109 00:25:28.632133 1747564 command_runner.go:130] > # 	"SETUID",
	I0109 00:25:28.632152 1747564 command_runner.go:130] > # 	"SETPCAP",
	I0109 00:25:28.632175 1747564 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0109 00:25:28.632208 1747564 command_runner.go:130] > # 	"KILL",
	I0109 00:25:28.632228 1747564 command_runner.go:130] > # ]
	I0109 00:25:28.632262 1747564 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0109 00:25:28.632284 1747564 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0109 00:25:28.632311 1747564 command_runner.go:130] > # add_inheritable_capabilities = true
	I0109 00:25:28.632462 1747564 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0109 00:25:28.632493 1747564 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0109 00:25:28.632533 1747564 command_runner.go:130] > # default_sysctls = [
	I0109 00:25:28.632552 1747564 command_runner.go:130] > # ]
	I0109 00:25:28.632573 1747564 command_runner.go:130] > # List of devices on the host that a
	I0109 00:25:28.632620 1747564 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0109 00:25:28.632641 1747564 command_runner.go:130] > # allowed_devices = [
	I0109 00:25:28.632670 1747564 command_runner.go:130] > # 	"/dev/fuse",
	I0109 00:25:28.632703 1747564 command_runner.go:130] > # ]
	I0109 00:25:28.632749 1747564 command_runner.go:130] > # List of additional devices, specified as
	I0109 00:25:28.632797 1747564 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0109 00:25:28.632826 1747564 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0109 00:25:28.632846 1747564 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0109 00:25:28.632858 1747564 command_runner.go:130] > # additional_devices = [
	I0109 00:25:28.632862 1747564 command_runner.go:130] > # ]
	I0109 00:25:28.632868 1747564 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0109 00:25:28.632873 1747564 command_runner.go:130] > # cdi_spec_dirs = [
	I0109 00:25:28.632878 1747564 command_runner.go:130] > # 	"/etc/cdi",
	I0109 00:25:28.632882 1747564 command_runner.go:130] > # 	"/var/run/cdi",
	I0109 00:25:28.632887 1747564 command_runner.go:130] > # ]
	I0109 00:25:28.632902 1747564 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0109 00:25:28.632919 1747564 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0109 00:25:28.632924 1747564 command_runner.go:130] > # Defaults to false.
	I0109 00:25:28.632931 1747564 command_runner.go:130] > # device_ownership_from_security_context = false
	I0109 00:25:28.632942 1747564 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0109 00:25:28.632951 1747564 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0109 00:25:28.632961 1747564 command_runner.go:130] > # hooks_dir = [
	I0109 00:25:28.632978 1747564 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0109 00:25:28.632987 1747564 command_runner.go:130] > # ]
	I0109 00:25:28.632995 1747564 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0109 00:25:28.633005 1747564 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0109 00:25:28.633012 1747564 command_runner.go:130] > # its default mounts from the following two files:
	I0109 00:25:28.633016 1747564 command_runner.go:130] > #
	I0109 00:25:28.633026 1747564 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0109 00:25:28.633036 1747564 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0109 00:25:28.633043 1747564 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0109 00:25:28.633056 1747564 command_runner.go:130] > #
	I0109 00:25:28.633064 1747564 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0109 00:25:28.633075 1747564 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0109 00:25:28.633084 1747564 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0109 00:25:28.633094 1747564 command_runner.go:130] > #      only add mounts it finds in this file.
	I0109 00:25:28.633101 1747564 command_runner.go:130] > #
	I0109 00:25:28.633107 1747564 command_runner.go:130] > # default_mounts_file = ""
	I0109 00:25:28.633116 1747564 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0109 00:25:28.633132 1747564 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0109 00:25:28.633139 1747564 command_runner.go:130] > # pids_limit = 0
	I0109 00:25:28.633147 1747564 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0109 00:25:28.633157 1747564 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0109 00:25:28.633165 1747564 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0109 00:25:28.633177 1747564 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0109 00:25:28.633185 1747564 command_runner.go:130] > # log_size_max = -1
	I0109 00:25:28.633194 1747564 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0109 00:25:28.633199 1747564 command_runner.go:130] > # log_to_journald = false
	I0109 00:25:28.633207 1747564 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0109 00:25:28.633218 1747564 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0109 00:25:28.633224 1747564 command_runner.go:130] > # Path to directory for container attach sockets.
	I0109 00:25:28.633230 1747564 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0109 00:25:28.633239 1747564 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0109 00:25:28.633244 1747564 command_runner.go:130] > # bind_mount_prefix = ""
	I0109 00:25:28.633254 1747564 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0109 00:25:28.633261 1747564 command_runner.go:130] > # read_only = false
	I0109 00:25:28.633268 1747564 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0109 00:25:28.633282 1747564 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0109 00:25:28.633290 1747564 command_runner.go:130] > # live configuration reload.
	I0109 00:25:28.633295 1747564 command_runner.go:130] > # log_level = "info"
	I0109 00:25:28.633302 1747564 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0109 00:25:28.633308 1747564 command_runner.go:130] > # This option supports live configuration reload.
	I0109 00:25:28.633316 1747564 command_runner.go:130] > # log_filter = ""
	I0109 00:25:28.633323 1747564 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0109 00:25:28.633334 1747564 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0109 00:25:28.633339 1747564 command_runner.go:130] > # separated by comma.
	I0109 00:25:28.633347 1747564 command_runner.go:130] > # uid_mappings = ""
	I0109 00:25:28.633355 1747564 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0109 00:25:28.633363 1747564 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0109 00:25:28.633368 1747564 command_runner.go:130] > # separated by comma.
	I0109 00:25:28.633375 1747564 command_runner.go:130] > # gid_mappings = ""
	I0109 00:25:28.633385 1747564 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0109 00:25:28.633395 1747564 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0109 00:25:28.633402 1747564 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0109 00:25:28.633408 1747564 command_runner.go:130] > # minimum_mappable_uid = -1
	I0109 00:25:28.633420 1747564 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0109 00:25:28.633428 1747564 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0109 00:25:28.633439 1747564 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0109 00:25:28.633444 1747564 command_runner.go:130] > # minimum_mappable_gid = -1
	I0109 00:25:28.633451 1747564 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0109 00:25:28.633459 1747564 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0109 00:25:28.633468 1747564 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0109 00:25:28.633475 1747564 command_runner.go:130] > # ctr_stop_timeout = 30
	I0109 00:25:28.633482 1747564 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0109 00:25:28.633490 1747564 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0109 00:25:28.633500 1747564 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0109 00:25:28.633506 1747564 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0109 00:25:28.633513 1747564 command_runner.go:130] > # drop_infra_ctr = true
	I0109 00:25:28.633521 1747564 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0109 00:25:28.633527 1747564 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0109 00:25:28.633536 1747564 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0109 00:25:28.633544 1747564 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0109 00:25:28.633553 1747564 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0109 00:25:28.633562 1747564 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0109 00:25:28.633571 1747564 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0109 00:25:28.633579 1747564 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0109 00:25:28.633586 1747564 command_runner.go:130] > # pinns_path = ""
	I0109 00:25:28.633594 1747564 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0109 00:25:28.633605 1747564 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I0109 00:25:28.633612 1747564 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I0109 00:25:28.633617 1747564 command_runner.go:130] > # default_runtime = "runc"
	I0109 00:25:28.633627 1747564 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0109 00:25:28.633639 1747564 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior of creating it as a directory).
	I0109 00:25:28.633650 1747564 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0109 00:25:28.633658 1747564 command_runner.go:130] > # creation as a file is not desired either.
	I0109 00:25:28.633668 1747564 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0109 00:25:28.633678 1747564 command_runner.go:130] > # the hostname is being managed dynamically.
	I0109 00:25:28.633684 1747564 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0109 00:25:28.633688 1747564 command_runner.go:130] > # ]
	I0109 00:25:28.633696 1747564 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0109 00:25:28.633706 1747564 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0109 00:25:28.633717 1747564 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I0109 00:25:28.633728 1747564 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I0109 00:25:28.633732 1747564 command_runner.go:130] > #
	I0109 00:25:28.633738 1747564 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I0109 00:25:28.633746 1747564 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I0109 00:25:28.633751 1747564 command_runner.go:130] > #  runtime_type = "oci"
	I0109 00:25:28.633757 1747564 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I0109 00:25:28.633766 1747564 command_runner.go:130] > #  privileged_without_host_devices = false
	I0109 00:25:28.633771 1747564 command_runner.go:130] > #  allowed_annotations = []
	I0109 00:25:28.633776 1747564 command_runner.go:130] > # Where:
	I0109 00:25:28.633784 1747564 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I0109 00:25:28.633792 1747564 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I0109 00:25:28.633802 1747564 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0109 00:25:28.633812 1747564 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0109 00:25:28.633817 1747564 command_runner.go:130] > #   in $PATH.
	I0109 00:25:28.633824 1747564 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I0109 00:25:28.633830 1747564 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0109 00:25:28.633843 1747564 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I0109 00:25:28.633856 1747564 command_runner.go:130] > #   state.
	I0109 00:25:28.633864 1747564 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0109 00:25:28.633871 1747564 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0109 00:25:28.633879 1747564 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0109 00:25:28.633888 1747564 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0109 00:25:28.633896 1747564 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0109 00:25:28.633904 1747564 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0109 00:25:28.633913 1747564 command_runner.go:130] > #   The currently recognized values are:
	I0109 00:25:28.633921 1747564 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0109 00:25:28.633933 1747564 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0109 00:25:28.633940 1747564 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0109 00:25:28.633951 1747564 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0109 00:25:28.633960 1747564 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0109 00:25:28.633971 1747564 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0109 00:25:28.633981 1747564 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0109 00:25:28.633989 1747564 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I0109 00:25:28.633995 1747564 command_runner.go:130] > #   should be moved to the container's cgroup
	I0109 00:25:28.634003 1747564 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0109 00:25:28.634010 1747564 command_runner.go:130] > runtime_path = "/usr/lib/cri-o-runc/sbin/runc"
	I0109 00:25:28.634019 1747564 command_runner.go:130] > runtime_type = "oci"
	I0109 00:25:28.634024 1747564 command_runner.go:130] > runtime_root = "/run/runc"
	I0109 00:25:28.634029 1747564 command_runner.go:130] > runtime_config_path = ""
	I0109 00:25:28.634034 1747564 command_runner.go:130] > monitor_path = ""
	I0109 00:25:28.634039 1747564 command_runner.go:130] > monitor_cgroup = ""
	I0109 00:25:28.634049 1747564 command_runner.go:130] > monitor_exec_cgroup = ""
	I0109 00:25:28.634099 1747564 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I0109 00:25:28.634108 1747564 command_runner.go:130] > # running containers
	I0109 00:25:28.634113 1747564 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I0109 00:25:28.634121 1747564 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I0109 00:25:28.634130 1747564 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I0109 00:25:28.634144 1747564 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I0109 00:25:28.634151 1747564 command_runner.go:130] > # Kata Containers with the default configured VMM
	I0109 00:25:28.634158 1747564 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I0109 00:25:28.634167 1747564 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I0109 00:25:28.634173 1747564 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I0109 00:25:28.634181 1747564 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I0109 00:25:28.634192 1747564 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I0109 00:25:28.634200 1747564 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0109 00:25:28.634207 1747564 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0109 00:25:28.634215 1747564 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0109 00:25:28.634234 1747564 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I0109 00:25:28.634244 1747564 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0109 00:25:28.634254 1747564 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0109 00:25:28.634266 1747564 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0109 00:25:28.634279 1747564 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0109 00:25:28.634286 1747564 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0109 00:25:28.634295 1747564 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0109 00:25:28.634301 1747564 command_runner.go:130] > # Example:
	I0109 00:25:28.634307 1747564 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0109 00:25:28.634316 1747564 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0109 00:25:28.634322 1747564 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0109 00:25:28.634328 1747564 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0109 00:25:28.634336 1747564 command_runner.go:130] > # cpuset = 0
	I0109 00:25:28.634341 1747564 command_runner.go:130] > # cpushares = "0-1"
	I0109 00:25:28.634349 1747564 command_runner.go:130] > # Where:
	I0109 00:25:28.634358 1747564 command_runner.go:130] > # The workload name is workload-type.
	I0109 00:25:28.634366 1747564 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0109 00:25:28.634373 1747564 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0109 00:25:28.634380 1747564 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0109 00:25:28.634392 1747564 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0109 00:25:28.634401 1747564 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0109 00:25:28.634405 1747564 command_runner.go:130] > # 
	I0109 00:25:28.634413 1747564 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0109 00:25:28.634420 1747564 command_runner.go:130] > #
	I0109 00:25:28.634427 1747564 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0109 00:25:28.634448 1747564 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0109 00:25:28.634457 1747564 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0109 00:25:28.634467 1747564 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0109 00:25:28.634474 1747564 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0109 00:25:28.634481 1747564 command_runner.go:130] > [crio.image]
	I0109 00:25:28.634489 1747564 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0109 00:25:28.634497 1747564 command_runner.go:130] > # default_transport = "docker://"
	I0109 00:25:28.634507 1747564 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0109 00:25:28.634518 1747564 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0109 00:25:28.634523 1747564 command_runner.go:130] > # global_auth_file = ""
	I0109 00:25:28.634534 1747564 command_runner.go:130] > # The image used to instantiate infra containers.
	I0109 00:25:28.634541 1747564 command_runner.go:130] > # This option supports live configuration reload.
	I0109 00:25:28.634546 1747564 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I0109 00:25:28.634555 1747564 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0109 00:25:28.634564 1747564 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0109 00:25:28.634570 1747564 command_runner.go:130] > # This option supports live configuration reload.
	I0109 00:25:28.634578 1747564 command_runner.go:130] > # pause_image_auth_file = ""
	I0109 00:25:28.634585 1747564 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0109 00:25:28.634592 1747564 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0109 00:25:28.634602 1747564 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0109 00:25:28.634609 1747564 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0109 00:25:28.634614 1747564 command_runner.go:130] > # pause_command = "/pause"
	I0109 00:25:28.634624 1747564 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0109 00:25:28.634636 1747564 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0109 00:25:28.634644 1747564 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0109 00:25:28.634656 1747564 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0109 00:25:28.634663 1747564 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0109 00:25:28.634672 1747564 command_runner.go:130] > # signature_policy = ""
	I0109 00:25:28.634681 1747564 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0109 00:25:28.634688 1747564 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0109 00:25:28.634697 1747564 command_runner.go:130] > # changing them here.
	I0109 00:25:28.634702 1747564 command_runner.go:130] > # insecure_registries = [
	I0109 00:25:28.634706 1747564 command_runner.go:130] > # ]
	I0109 00:25:28.634716 1747564 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0109 00:25:28.634725 1747564 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0109 00:25:28.634730 1747564 command_runner.go:130] > # image_volumes = "mkdir"
	I0109 00:25:28.634736 1747564 command_runner.go:130] > # Temporary directory to use for storing big files
	I0109 00:25:28.634744 1747564 command_runner.go:130] > # big_files_temporary_dir = ""
	I0109 00:25:28.634752 1747564 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0109 00:25:28.634759 1747564 command_runner.go:130] > # CNI plugins.
	I0109 00:25:28.634764 1747564 command_runner.go:130] > [crio.network]
	I0109 00:25:28.634771 1747564 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0109 00:25:28.634780 1747564 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0109 00:25:28.634787 1747564 command_runner.go:130] > # cni_default_network = ""
	I0109 00:25:28.634797 1747564 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0109 00:25:28.634816 1747564 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0109 00:25:28.634824 1747564 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0109 00:25:28.634832 1747564 command_runner.go:130] > # plugin_dirs = [
	I0109 00:25:28.634837 1747564 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0109 00:25:28.634841 1747564 command_runner.go:130] > # ]
	I0109 00:25:28.634857 1747564 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0109 00:25:28.634864 1747564 command_runner.go:130] > [crio.metrics]
	I0109 00:25:28.634870 1747564 command_runner.go:130] > # Globally enable or disable metrics support.
	I0109 00:25:28.634876 1747564 command_runner.go:130] > # enable_metrics = false
	I0109 00:25:28.634883 1747564 command_runner.go:130] > # Specify enabled metrics collectors.
	I0109 00:25:28.634891 1747564 command_runner.go:130] > # Per default all metrics are enabled.
	I0109 00:25:28.634899 1747564 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0109 00:25:28.634909 1747564 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0109 00:25:28.634916 1747564 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0109 00:25:28.634923 1747564 command_runner.go:130] > # metrics_collectors = [
	I0109 00:25:28.634928 1747564 command_runner.go:130] > # 	"operations",
	I0109 00:25:28.634939 1747564 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0109 00:25:28.634945 1747564 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0109 00:25:28.634952 1747564 command_runner.go:130] > # 	"operations_errors",
	I0109 00:25:28.634958 1747564 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0109 00:25:28.634968 1747564 command_runner.go:130] > # 	"image_pulls_by_name",
	I0109 00:25:28.634973 1747564 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0109 00:25:28.634978 1747564 command_runner.go:130] > # 	"image_pulls_failures",
	I0109 00:25:28.634986 1747564 command_runner.go:130] > # 	"image_pulls_successes",
	I0109 00:25:28.634993 1747564 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0109 00:25:28.634999 1747564 command_runner.go:130] > # 	"image_layer_reuse",
	I0109 00:25:28.635006 1747564 command_runner.go:130] > # 	"containers_oom_total",
	I0109 00:25:28.635011 1747564 command_runner.go:130] > # 	"containers_oom",
	I0109 00:25:28.635016 1747564 command_runner.go:130] > # 	"processes_defunct",
	I0109 00:25:28.635025 1747564 command_runner.go:130] > # 	"operations_total",
	I0109 00:25:28.635030 1747564 command_runner.go:130] > # 	"operations_latency_seconds",
	I0109 00:25:28.635036 1747564 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0109 00:25:28.635041 1747564 command_runner.go:130] > # 	"operations_errors_total",
	I0109 00:25:28.635048 1747564 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0109 00:25:28.635057 1747564 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0109 00:25:28.635064 1747564 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0109 00:25:28.635070 1747564 command_runner.go:130] > # 	"image_pulls_success_total",
	I0109 00:25:28.635075 1747564 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0109 00:25:28.635083 1747564 command_runner.go:130] > # 	"containers_oom_count_total",
	I0109 00:25:28.635087 1747564 command_runner.go:130] > # ]
	I0109 00:25:28.635093 1747564 command_runner.go:130] > # The port on which the metrics server will listen.
	I0109 00:25:28.635102 1747564 command_runner.go:130] > # metrics_port = 9090
	I0109 00:25:28.635108 1747564 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0109 00:25:28.635113 1747564 command_runner.go:130] > # metrics_socket = ""
	I0109 00:25:28.635119 1747564 command_runner.go:130] > # The certificate for the secure metrics server.
	I0109 00:25:28.635126 1747564 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0109 00:25:28.635136 1747564 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0109 00:25:28.635144 1747564 command_runner.go:130] > # certificate on any modification event.
	I0109 00:25:28.635151 1747564 command_runner.go:130] > # metrics_cert = ""
	I0109 00:25:28.635157 1747564 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0109 00:25:28.635166 1747564 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0109 00:25:28.635171 1747564 command_runner.go:130] > # metrics_key = ""
	I0109 00:25:28.635179 1747564 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0109 00:25:28.635190 1747564 command_runner.go:130] > [crio.tracing]
	I0109 00:25:28.635197 1747564 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0109 00:25:28.635202 1747564 command_runner.go:130] > # enable_tracing = false
	I0109 00:25:28.635208 1747564 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0109 00:25:28.635216 1747564 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0109 00:25:28.635224 1747564 command_runner.go:130] > # Number of samples to collect per million spans.
	I0109 00:25:28.635231 1747564 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0109 00:25:28.635241 1747564 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0109 00:25:28.635245 1747564 command_runner.go:130] > [crio.stats]
	I0109 00:25:28.635252 1747564 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0109 00:25:28.635261 1747564 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0109 00:25:28.635266 1747564 command_runner.go:130] > # stats_collection_period = 0
	I0109 00:25:28.635299 1747564 command_runner.go:130] ! time="2024-01-09 00:25:28.622931503Z" level=info msg="Starting CRI-O, version: 1.24.6, git: 4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90(clean)"
	I0109 00:25:28.635318 1747564 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
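Nearly all of the `crio config` dump above is commented-out defaults; the settings this run actually pins are the few active lines (conmon_cgroup = "pod", cgroup_manager = "cgroupfs", the [crio.runtime.runtimes.runc] block, and pause_image = "registry.k8s.io/pause:3.9"). A small sketch that runs the same command and prints only the active lines (assumes crio is on PATH):

	package main

	import (
		"bufio"
		"bytes"
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Run the same command as the log and keep only lines that are
		// neither blank nor comments, i.e. the active settings.
		out, err := exec.Command("crio", "config").Output()
		if err != nil {
			panic(err)
		}
		sc := bufio.NewScanner(bytes.NewReader(out))
		for sc.Scan() {
			line := strings.TrimSpace(sc.Text())
			if line == "" || strings.HasPrefix(line, "#") {
				continue
			}
			fmt.Println(line) // e.g. conmon_cgroup = "pod"
		}
	}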
	I0109 00:25:28.635410 1747564 cni.go:84] Creating CNI manager for ""
	I0109 00:25:28.635421 1747564 cni.go:136] 2 nodes found, recommending kindnet
	I0109 00:25:28.635429 1747564 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0109 00:25:28.635453 1747564 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.3 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-979047 NodeName:multinode-979047-m02 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.3 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0109 00:25:28.635579 1747564 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.3
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-979047-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.3
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0109 00:25:28.635641 1747564 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=multinode-979047-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-979047 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0109 00:25:28.635726 1747564 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0109 00:25:28.645479 1747564 command_runner.go:130] > kubeadm
	I0109 00:25:28.645501 1747564 command_runner.go:130] > kubectl
	I0109 00:25:28.645507 1747564 command_runner.go:130] > kubelet
	I0109 00:25:28.646530 1747564 binaries.go:44] Found k8s binaries, skipping transfer
	I0109 00:25:28.646618 1747564 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0109 00:25:28.656962 1747564 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0109 00:25:28.678707 1747564 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
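The two scp steps write the generic kubelet.service plus the per-node 10-kubeadm.conf drop-in whose ExecStart is shown above; the empty ExecStart= line in a drop-in is systemd's way of clearing the base unit's command before overriding it. A sketch of the same write-and-reload done locally (flags abbreviated from the log; not minikube's actual transfer code):

	package main

	import (
		"os"
		"os/exec"
	)

	func main() {
		dir := "/etc/systemd/system/kubelet.service.d"
		// The empty ExecStart= clears the base unit's command before
		// the override, mirroring the drop-in shown in the log.
		dropIn := "[Service]\n" +
			"ExecStart=\n" +
			"ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet " +
			"--config=/var/lib/kubelet/config.yaml " +
			"--container-runtime-endpoint=unix:///var/run/crio/crio.sock " +
			"--hostname-override=multinode-979047-m02 --node-ip=192.168.58.3\n"
		if err := os.MkdirAll(dir, 0o755); err != nil {
			panic(err)
		}
		if err := os.WriteFile(dir+"/10-kubeadm.conf", []byte(dropIn), 0o644); err != nil {
			panic(err)
		}
		// Pick up the new drop-in.
		if err := exec.Command("systemctl", "daemon-reload").Run(); err != nil {
			panic(err)
		}
	}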
	I0109 00:25:28.701329 1747564 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0109 00:25:28.705671 1747564 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
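The bash pipeline above makes the hosts entry idempotent: strip any existing control-plane.minikube.internal line, append the current mapping, and copy the temp file back over /etc/hosts. The same logic in Go (a sketch; minikube runs the bash version over SSH with sudo):

	package main

	import (
		"os"
		"strings"
	)

	func main() {
		const host = "control-plane.minikube.internal"
		data, err := os.ReadFile("/etc/hosts")
		if err != nil {
			panic(err)
		}
		// Drop any stale mapping for the host, then append the fresh one,
		// mirroring the grep -v + echo pipeline in the log.
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			if strings.HasSuffix(line, "\t"+host) {
				continue
			}
			kept = append(kept, line)
		}
		kept = append(kept, "192.168.58.2\t"+host)
		if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
			panic(err)
		}
	}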
	I0109 00:25:28.718964 1747564 host.go:66] Checking if "multinode-979047" exists ...
	I0109 00:25:28.719231 1747564 start.go:304] JoinCluster: &{Name:multinode-979047 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-979047 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0109 00:25:28.719323 1747564 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0109 00:25:28.719379 1747564 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-979047
	I0109 00:25:28.719749 1747564 config.go:182] Loaded profile config "multinode-979047": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0109 00:25:28.739224 1747564 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34444 SSHKeyPath:/home/jenkins/minikube-integration/17830-1678586/.minikube/machines/multinode-979047/id_rsa Username:docker}
	I0109 00:25:28.919810 1747564 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token eoh4iy.u5yx3kzmkt4lhyeu --discovery-token-ca-cert-hash sha256:2f5d2b90e0873ecdcc03ee1f37a9ff73145aa86994d578f7f9f8008617cee046 
	I0109 00:25:28.919856 1747564 start.go:325] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0109 00:25:28.919896 1747564 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token eoh4iy.u5yx3kzmkt4lhyeu --discovery-token-ca-cert-hash sha256:2f5d2b90e0873ecdcc03ee1f37a9ff73145aa86994d578f7f9f8008617cee046 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-979047-m02"
	I0109 00:25:28.962575 1747564 command_runner.go:130] > [preflight] Running pre-flight checks
	I0109 00:25:29.010738 1747564 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
	I0109 00:25:29.010773 1747564 command_runner.go:130] > KERNEL_VERSION: 5.15.0-1051-aws
	I0109 00:25:29.010781 1747564 command_runner.go:130] > OS: Linux
	I0109 00:25:29.010804 1747564 command_runner.go:130] > CGROUPS_CPU: enabled
	I0109 00:25:29.010821 1747564 command_runner.go:130] > CGROUPS_CPUACCT: enabled
	I0109 00:25:29.010829 1747564 command_runner.go:130] > CGROUPS_CPUSET: enabled
	I0109 00:25:29.010839 1747564 command_runner.go:130] > CGROUPS_DEVICES: enabled
	I0109 00:25:29.010846 1747564 command_runner.go:130] > CGROUPS_FREEZER: enabled
	I0109 00:25:29.010858 1747564 command_runner.go:130] > CGROUPS_MEMORY: enabled
	I0109 00:25:29.010881 1747564 command_runner.go:130] > CGROUPS_PIDS: enabled
	I0109 00:25:29.010895 1747564 command_runner.go:130] > CGROUPS_HUGETLB: enabled
	I0109 00:25:29.010911 1747564 command_runner.go:130] > CGROUPS_BLKIO: enabled
	I0109 00:25:29.123795 1747564 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0109 00:25:29.123869 1747564 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0109 00:25:29.154666 1747564 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0109 00:25:29.154880 1747564 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0109 00:25:29.154932 1747564 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0109 00:25:29.270150 1747564 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I0109 00:25:32.789055 1747564 command_runner.go:130] > This node has joined the cluster:
	I0109 00:25:32.789078 1747564 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0109 00:25:32.789087 1747564 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0109 00:25:32.789095 1747564 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0109 00:25:32.792321 1747564 command_runner.go:130] ! W0109 00:25:28.961841    1027 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/crio/crio.sock". Please update your configuration!
	I0109 00:25:32.792360 1747564 command_runner.go:130] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1051-aws\n", err: exit status 1
	I0109 00:25:32.792375 1747564 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0109 00:25:32.792396 1747564 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token eoh4iy.u5yx3kzmkt4lhyeu --discovery-token-ca-cert-hash sha256:2f5d2b90e0873ecdcc03ee1f37a9ff73145aa86994d578f7f9f8008617cee046 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-979047-m02": (3.87248357s)
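
JoinCluster is a two-step sequence, both halves visible above: minikube first asks the control plane for a fresh join command (kubeadm token create --print-join-command --ttl=0), then replays that command on the new node with --ignore-preflight-errors=all, an explicit CRI socket, and a per-node --node-name. A hedged sketch of that sequence with os/exec (the SSH layer minikube actually uses, and the versioned PATH prefix from the log, are simplified away):

package sketch

import (
	"fmt"
	"os/exec"
	"strings"
)

// joinWorker mints a never-expiring join token on the control plane, then
// replays the printed join command on the worker with the extra flags seen
// in the log. --ttl=0 is fine for a throwaway test cluster, not production.
func joinWorker(kubeadmPath, criSocket, nodeName string) error {
	out, err := exec.Command("sudo", kubeadmPath, "token", "create",
		"--print-join-command", "--ttl=0").Output()
	if err != nil {
		return fmt.Errorf("token create: %w", err)
	}
	// out is "kubeadm join <endpoint> --token ... --discovery-token-ca-cert-hash ..."
	args := strings.Fields(strings.TrimSpace(string(out)))
	args = append(args,
		"--ignore-preflight-errors=all",
		"--cri-socket", criSocket,
		"--node-name="+nodeName)
	return exec.Command("sudo", args...).Run()
}
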
	I0109 00:25:32.792416 1747564 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0109 00:25:33.046386 1747564 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /lib/systemd/system/kubelet.service.
	I0109 00:25:33.046533 1747564 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a minikube.k8s.io/name=multinode-979047 minikube.k8s.io/updated_at=2024_01_09T00_25_33_0700 minikube.k8s.io/primary=false "-l minikube.k8s.io/primary!=true" --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:25:33.243585 1747564 command_runner.go:130] > node/multinode-979047-m02 labeled
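
The kubectl invocation above uses the selector -l minikube.k8s.io/primary!=true so only non-primary nodes are touched, and --overwrite so reruns are idempotent. At the API level, `kubectl label --overwrite` is a patch on node metadata; a minimal client-go sketch of the same effect for a single node (assumes a clientset has already been built from the kubeconfig):

package sketch

import (
	"context"
	"encoding/json"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
)

// labelNode overwrites labels on one node via a strategic merge patch, the
// API-level equivalent of `kubectl label --overwrite`.
func labelNode(cs *kubernetes.Clientset, node string, labels map[string]string) error {
	patch, err := json.Marshal(map[string]any{
		"metadata": map[string]any{"labels": labels},
	})
	if err != nil {
		return err
	}
	_, err = cs.CoreV1().Nodes().Patch(context.TODO(), node,
		types.StrategicMergePatchType, patch, metav1.PatchOptions{})
	return err
}
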
	I0109 00:25:33.247511 1747564 start.go:306] JoinCluster complete in 4.528274905s
	I0109 00:25:33.247543 1747564 cni.go:84] Creating CNI manager for ""
	I0109 00:25:33.247551 1747564 cni.go:136] 2 nodes found, recommending kindnet
	I0109 00:25:33.247606 1747564 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0109 00:25:33.252958 1747564 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0109 00:25:33.252985 1747564 command_runner.go:130] >   Size: 4030506   	Blocks: 7880       IO Block: 4096   regular file
	I0109 00:25:33.252994 1747564 command_runner.go:130] > Device: 3ah/58d	Inode: 2086842     Links: 1
	I0109 00:25:33.253001 1747564 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0109 00:25:33.253008 1747564 command_runner.go:130] > Access: 2023-12-04 16:39:54.000000000 +0000
	I0109 00:25:33.253014 1747564 command_runner.go:130] > Modify: 2023-12-04 16:39:54.000000000 +0000
	I0109 00:25:33.253021 1747564 command_runner.go:130] > Change: 2024-01-09 00:01:34.410757867 +0000
	I0109 00:25:33.253030 1747564 command_runner.go:130] >  Birth: 2024-01-09 00:01:34.366757483 +0000
	I0109 00:25:33.255949 1747564 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0109 00:25:33.255972 1747564 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0109 00:25:33.301341 1747564 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0109 00:25:33.649651 1747564 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0109 00:25:33.655501 1747564 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0109 00:25:33.659810 1747564 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0109 00:25:33.676959 1747564 command_runner.go:130] > daemonset.apps/kindnet configured
	I0109 00:25:33.683370 1747564 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17830-1678586/kubeconfig
	I0109 00:25:33.683668 1747564 kapi.go:59] client config for multinode-979047: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/multinode-979047/client.crt", KeyFile:"/home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/multinode-979047/client.key", CAFile:"/home/jenkins/minikube-integration/17830-1678586/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16b9a10), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0109 00:25:33.684055 1747564 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0109 00:25:33.684070 1747564 round_trippers.go:469] Request Headers:
	I0109 00:25:33.684079 1747564 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:25:33.684086 1747564 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0109 00:25:33.687107 1747564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0109 00:25:33.687130 1747564 round_trippers.go:577] Response Headers:
	I0109 00:25:33.687142 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 95fdf47b-223c-4eb6-a41b-30b44ade375a
	I0109 00:25:33.687149 1747564 round_trippers.go:580]     Content-Length: 291
	I0109 00:25:33.687158 1747564 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:25:33 GMT
	I0109 00:25:33.687168 1747564 round_trippers.go:580]     Audit-Id: a7c78e9b-b2f8-4cc3-ad75-1e61b6df928d
	I0109 00:25:33.687174 1747564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:25:33.687184 1747564 round_trippers.go:580]     Content-Type: application/json
	I0109 00:25:33.687191 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab5afe39-1a2e-4c0a-808e-6147fe73e525
	I0109 00:25:33.687497 1747564 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"9fa0a87b-3794-41d2-9f6b-1de3c8c3d9c9","resourceVersion":"447","creationTimestamp":"2024-01-09T00:24:54Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0109 00:25:33.687612 1747564 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-979047" context rescaled to 1 replicas
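
The GET against .../deployments/coredns/scale above is the Deployment's scale subresource: minikube reads the current replica count and rescales coredns to a single replica so the multinode cluster does not run duplicate DNS pods. A sketch of the same read-modify-write against the scale subresource with client-go (clientset construction omitted, as in the labeling sketch earlier):

package sketch

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// rescaleCoreDNS reads the scale subresource (the same
// /apis/apps/v1/.../deployments/coredns/scale endpoint the log GETs) and
// writes it back only if the replica count differs.
func rescaleCoreDNS(cs *kubernetes.Clientset, replicas int32) error {
	ctx := context.TODO()
	dep := cs.AppsV1().Deployments("kube-system")
	scale, err := dep.GetScale(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		return err
	}
	if scale.Spec.Replicas == replicas {
		return nil // already at the desired count, nothing to write
	}
	scale.Spec.Replicas = replicas
	_, err = dep.UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{})
	return err
}
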
	I0109 00:25:33.687658 1747564 start.go:223] Will wait 6m0s for node &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0109 00:25:33.692345 1747564 out.go:177] * Verifying Kubernetes components...
	I0109 00:25:33.694430 1747564 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0109 00:25:33.713183 1747564 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17830-1678586/kubeconfig
	I0109 00:25:33.713509 1747564 kapi.go:59] client config for multinode-979047: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/multinode-979047/client.crt", KeyFile:"/home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/multinode-979047/client.key", CAFile:"/home/jenkins/minikube-integration/17830-1678586/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16b9a10), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0109 00:25:33.713813 1747564 node_ready.go:35] waiting up to 6m0s for node "multinode-979047-m02" to be "Ready" ...
	I0109 00:25:33.713928 1747564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-979047-m02
	I0109 00:25:33.713960 1747564 round_trippers.go:469] Request Headers:
	I0109 00:25:33.713983 1747564 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:25:33.714005 1747564 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0109 00:25:33.716586 1747564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0109 00:25:33.716639 1747564 round_trippers.go:577] Response Headers:
	I0109 00:25:33.716653 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 95fdf47b-223c-4eb6-a41b-30b44ade375a
	I0109 00:25:33.716661 1747564 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:25:33 GMT
	I0109 00:25:33.716667 1747564 round_trippers.go:580]     Audit-Id: 19a177dd-9c11-4d62-b92d-4ac833bf44b7
	I0109 00:25:33.716673 1747564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:25:33.716679 1747564 round_trippers.go:580]     Content-Type: application/json
	I0109 00:25:33.716690 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab5afe39-1a2e-4c0a-808e-6147fe73e525
	I0109 00:25:33.717118 1747564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-979047-m02","uid":"cb9ac37c-021d-49d9-bd4f-947cba9ffead","resourceVersion":"498","creationTimestamp":"2024-01-09T00:25:32Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-979047-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-979047","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_09T00_25_33_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:25:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 5735 chars]
	I0109 00:25:34.214810 1747564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-979047-m02
	I0109 00:25:34.214836 1747564 round_trippers.go:469] Request Headers:
	I0109 00:25:34.214846 1747564 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:25:34.214858 1747564 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0109 00:25:34.217439 1747564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0109 00:25:34.217468 1747564 round_trippers.go:577] Response Headers:
	I0109 00:25:34.217478 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab5afe39-1a2e-4c0a-808e-6147fe73e525
	I0109 00:25:34.217485 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 95fdf47b-223c-4eb6-a41b-30b44ade375a
	I0109 00:25:34.217491 1747564 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:25:34 GMT
	I0109 00:25:34.217497 1747564 round_trippers.go:580]     Audit-Id: d0305676-4276-4dd1-af8a-83e7ea5c4c4c
	I0109 00:25:34.217504 1747564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:25:34.217510 1747564 round_trippers.go:580]     Content-Type: application/json
	I0109 00:25:34.217648 1747564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-979047-m02","uid":"cb9ac37c-021d-49d9-bd4f-947cba9ffead","resourceVersion":"498","creationTimestamp":"2024-01-09T00:25:32Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-979047-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-979047","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_09T00_25_33_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:25:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 5735 chars]
	I0109 00:25:34.714769 1747564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-979047-m02
	I0109 00:25:34.714800 1747564 round_trippers.go:469] Request Headers:
	I0109 00:25:34.714811 1747564 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:25:34.714819 1747564 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0109 00:25:34.717182 1747564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0109 00:25:34.717213 1747564 round_trippers.go:577] Response Headers:
	I0109 00:25:34.717221 1747564 round_trippers.go:580]     Audit-Id: c4ea75c4-6dc4-4cb4-8d1e-8982560694e2
	I0109 00:25:34.717227 1747564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:25:34.717233 1747564 round_trippers.go:580]     Content-Type: application/json
	I0109 00:25:34.717240 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab5afe39-1a2e-4c0a-808e-6147fe73e525
	I0109 00:25:34.717246 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 95fdf47b-223c-4eb6-a41b-30b44ade375a
	I0109 00:25:34.717252 1747564 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:25:34 GMT
	I0109 00:25:34.717370 1747564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-979047-m02","uid":"cb9ac37c-021d-49d9-bd4f-947cba9ffead","resourceVersion":"498","creationTimestamp":"2024-01-09T00:25:32Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-979047-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-979047","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_09T00_25_33_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:25:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 5735 chars]
	I0109 00:25:35.214916 1747564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-979047-m02
	I0109 00:25:35.214941 1747564 round_trippers.go:469] Request Headers:
	I0109 00:25:35.214950 1747564 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:25:35.214957 1747564 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0109 00:25:35.217500 1747564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0109 00:25:35.217524 1747564 round_trippers.go:577] Response Headers:
	I0109 00:25:35.217534 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab5afe39-1a2e-4c0a-808e-6147fe73e525
	I0109 00:25:35.217540 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 95fdf47b-223c-4eb6-a41b-30b44ade375a
	I0109 00:25:35.217547 1747564 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:25:35 GMT
	I0109 00:25:35.217554 1747564 round_trippers.go:580]     Audit-Id: 90b8c6f1-82b6-4b1c-ac93-d300dae52754
	I0109 00:25:35.217560 1747564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:25:35.217570 1747564 round_trippers.go:580]     Content-Type: application/json
	I0109 00:25:35.217682 1747564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-979047-m02","uid":"cb9ac37c-021d-49d9-bd4f-947cba9ffead","resourceVersion":"498","creationTimestamp":"2024-01-09T00:25:32Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-979047-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-979047","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_09T00_25_33_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:25:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 5735 chars]
	I0109 00:25:35.714348 1747564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-979047-m02
	I0109 00:25:35.714386 1747564 round_trippers.go:469] Request Headers:
	I0109 00:25:35.714399 1747564 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:25:35.714410 1747564 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0109 00:25:35.716998 1747564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0109 00:25:35.717022 1747564 round_trippers.go:577] Response Headers:
	I0109 00:25:35.717031 1747564 round_trippers.go:580]     Audit-Id: 90473907-f3f5-4276-bfd5-38b9ab4062d2
	I0109 00:25:35.717038 1747564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:25:35.717044 1747564 round_trippers.go:580]     Content-Type: application/json
	I0109 00:25:35.717070 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab5afe39-1a2e-4c0a-808e-6147fe73e525
	I0109 00:25:35.717080 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 95fdf47b-223c-4eb6-a41b-30b44ade375a
	I0109 00:25:35.717086 1747564 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:25:35 GMT
	I0109 00:25:35.717460 1747564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-979047-m02","uid":"cb9ac37c-021d-49d9-bd4f-947cba9ffead","resourceVersion":"498","creationTimestamp":"2024-01-09T00:25:32Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-979047-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-979047","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_09T00_25_33_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:25:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 5735 chars]
	I0109 00:25:35.717904 1747564 node_ready.go:58] node "multinode-979047-m02" has status "Ready":"False"
	I0109 00:25:36.214068 1747564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-979047-m02
	I0109 00:25:36.214089 1747564 round_trippers.go:469] Request Headers:
	I0109 00:25:36.214099 1747564 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:25:36.214107 1747564 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0109 00:25:36.216530 1747564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0109 00:25:36.216550 1747564 round_trippers.go:577] Response Headers:
	I0109 00:25:36.216558 1747564 round_trippers.go:580]     Content-Type: application/json
	I0109 00:25:36.216565 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab5afe39-1a2e-4c0a-808e-6147fe73e525
	I0109 00:25:36.216571 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 95fdf47b-223c-4eb6-a41b-30b44ade375a
	I0109 00:25:36.216577 1747564 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:25:36 GMT
	I0109 00:25:36.216583 1747564 round_trippers.go:580]     Audit-Id: 28e11dfc-a383-450e-9b68-58b4fd838518
	I0109 00:25:36.216590 1747564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:25:36.216709 1747564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-979047-m02","uid":"cb9ac37c-021d-49d9-bd4f-947cba9ffead","resourceVersion":"498","creationTimestamp":"2024-01-09T00:25:32Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-979047-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-979047","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_09T00_25_33_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:25:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 5735 chars]
	I0109 00:25:36.715004 1747564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-979047-m02
	I0109 00:25:36.715031 1747564 round_trippers.go:469] Request Headers:
	I0109 00:25:36.715040 1747564 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:25:36.715047 1747564 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0109 00:25:36.717505 1747564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0109 00:25:36.720287 1747564 round_trippers.go:577] Response Headers:
	I0109 00:25:36.720300 1747564 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:25:36 GMT
	I0109 00:25:36.720309 1747564 round_trippers.go:580]     Audit-Id: c5af9291-5b56-45de-b6b7-a9e3277f5df4
	I0109 00:25:36.720316 1747564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:25:36.720325 1747564 round_trippers.go:580]     Content-Type: application/json
	I0109 00:25:36.720332 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab5afe39-1a2e-4c0a-808e-6147fe73e525
	I0109 00:25:36.720338 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 95fdf47b-223c-4eb6-a41b-30b44ade375a
	I0109 00:25:36.720506 1747564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-979047-m02","uid":"cb9ac37c-021d-49d9-bd4f-947cba9ffead","resourceVersion":"511","creationTimestamp":"2024-01-09T00:25:32Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-979047-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-979047","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_09T00_25_33_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:25:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5844 chars]
	I0109 00:25:37.214588 1747564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-979047-m02
	I0109 00:25:37.214614 1747564 round_trippers.go:469] Request Headers:
	I0109 00:25:37.214624 1747564 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:25:37.214631 1747564 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0109 00:25:37.217197 1747564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0109 00:25:37.217224 1747564 round_trippers.go:577] Response Headers:
	I0109 00:25:37.217233 1747564 round_trippers.go:580]     Audit-Id: e8cccc00-11a3-4ed4-978c-ba03ba61a76a
	I0109 00:25:37.217240 1747564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:25:37.217245 1747564 round_trippers.go:580]     Content-Type: application/json
	I0109 00:25:37.217256 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab5afe39-1a2e-4c0a-808e-6147fe73e525
	I0109 00:25:37.217263 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 95fdf47b-223c-4eb6-a41b-30b44ade375a
	I0109 00:25:37.217270 1747564 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:25:37 GMT
	I0109 00:25:37.217371 1747564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-979047-m02","uid":"cb9ac37c-021d-49d9-bd4f-947cba9ffead","resourceVersion":"511","creationTimestamp":"2024-01-09T00:25:32Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-979047-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-979047","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_09T00_25_33_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:25:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5844 chars]
	I0109 00:25:37.714157 1747564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-979047-m02
	I0109 00:25:37.714181 1747564 round_trippers.go:469] Request Headers:
	I0109 00:25:37.714198 1747564 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:25:37.714205 1747564 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0109 00:25:37.716831 1747564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0109 00:25:37.716849 1747564 round_trippers.go:577] Response Headers:
	I0109 00:25:37.716857 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 95fdf47b-223c-4eb6-a41b-30b44ade375a
	I0109 00:25:37.716864 1747564 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:25:37 GMT
	I0109 00:25:37.716870 1747564 round_trippers.go:580]     Audit-Id: d4f1e3d5-525a-4ac0-94ae-bbfd4a3d7a3e
	I0109 00:25:37.716876 1747564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:25:37.716882 1747564 round_trippers.go:580]     Content-Type: application/json
	I0109 00:25:37.716888 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab5afe39-1a2e-4c0a-808e-6147fe73e525
	I0109 00:25:37.717071 1747564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-979047-m02","uid":"cb9ac37c-021d-49d9-bd4f-947cba9ffead","resourceVersion":"511","creationTimestamp":"2024-01-09T00:25:32Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-979047-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-979047","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_09T00_25_33_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:25:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5844 chars]
	I0109 00:25:38.214640 1747564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-979047-m02
	I0109 00:25:38.214670 1747564 round_trippers.go:469] Request Headers:
	I0109 00:25:38.214680 1747564 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:25:38.214688 1747564 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0109 00:25:38.217218 1747564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0109 00:25:38.217241 1747564 round_trippers.go:577] Response Headers:
	I0109 00:25:38.217250 1747564 round_trippers.go:580]     Audit-Id: 216567f0-6df3-4a9d-a415-5fd15dfb5ea0
	I0109 00:25:38.217256 1747564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:25:38.217263 1747564 round_trippers.go:580]     Content-Type: application/json
	I0109 00:25:38.217269 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab5afe39-1a2e-4c0a-808e-6147fe73e525
	I0109 00:25:38.217276 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 95fdf47b-223c-4eb6-a41b-30b44ade375a
	I0109 00:25:38.217289 1747564 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:25:38 GMT
	I0109 00:25:38.217553 1747564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-979047-m02","uid":"cb9ac37c-021d-49d9-bd4f-947cba9ffead","resourceVersion":"511","creationTimestamp":"2024-01-09T00:25:32Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-979047-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-979047","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_09T00_25_33_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:25:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5844 chars]
	I0109 00:25:38.217956 1747564 node_ready.go:58] node "multinode-979047-m02" has status "Ready":"False"
	I0109 00:25:38.714969 1747564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-979047-m02
	I0109 00:25:38.714994 1747564 round_trippers.go:469] Request Headers:
	I0109 00:25:38.715004 1747564 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:25:38.715011 1747564 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0109 00:25:38.717325 1747564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0109 00:25:38.717347 1747564 round_trippers.go:577] Response Headers:
	I0109 00:25:38.717355 1747564 round_trippers.go:580]     Audit-Id: c57c7fc6-9f69-42ed-884f-b96e6004ee70
	I0109 00:25:38.717361 1747564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:25:38.717368 1747564 round_trippers.go:580]     Content-Type: application/json
	I0109 00:25:38.717374 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab5afe39-1a2e-4c0a-808e-6147fe73e525
	I0109 00:25:38.717380 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 95fdf47b-223c-4eb6-a41b-30b44ade375a
	I0109 00:25:38.717386 1747564 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:25:38 GMT
	I0109 00:25:38.717535 1747564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-979047-m02","uid":"cb9ac37c-021d-49d9-bd4f-947cba9ffead","resourceVersion":"511","creationTimestamp":"2024-01-09T00:25:32Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-979047-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-979047","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_09T00_25_33_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:25:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5844 chars]
	I0109 00:25:39.214815 1747564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-979047-m02
	I0109 00:25:39.214837 1747564 round_trippers.go:469] Request Headers:
	I0109 00:25:39.214846 1747564 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:25:39.214859 1747564 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0109 00:25:39.217280 1747564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0109 00:25:39.217299 1747564 round_trippers.go:577] Response Headers:
	I0109 00:25:39.217307 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 95fdf47b-223c-4eb6-a41b-30b44ade375a
	I0109 00:25:39.217315 1747564 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:25:39 GMT
	I0109 00:25:39.217321 1747564 round_trippers.go:580]     Audit-Id: 9a9b13b8-1d49-41cf-8cee-126af232a2b9
	I0109 00:25:39.217327 1747564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:25:39.217337 1747564 round_trippers.go:580]     Content-Type: application/json
	I0109 00:25:39.217343 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab5afe39-1a2e-4c0a-808e-6147fe73e525
	I0109 00:25:39.217442 1747564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-979047-m02","uid":"cb9ac37c-021d-49d9-bd4f-947cba9ffead","resourceVersion":"511","creationTimestamp":"2024-01-09T00:25:32Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-979047-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-979047","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_09T00_25_33_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:25:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5844 chars]
	I0109 00:25:39.714669 1747564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-979047-m02
	I0109 00:25:39.714702 1747564 round_trippers.go:469] Request Headers:
	I0109 00:25:39.714712 1747564 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:25:39.714719 1747564 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0109 00:25:39.717617 1747564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0109 00:25:39.717644 1747564 round_trippers.go:577] Response Headers:
	I0109 00:25:39.717657 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab5afe39-1a2e-4c0a-808e-6147fe73e525
	I0109 00:25:39.717664 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 95fdf47b-223c-4eb6-a41b-30b44ade375a
	I0109 00:25:39.717671 1747564 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:25:39 GMT
	I0109 00:25:39.717677 1747564 round_trippers.go:580]     Audit-Id: 83f27348-5bbb-425a-946b-51c17b13d2c3
	I0109 00:25:39.717684 1747564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:25:39.717693 1747564 round_trippers.go:580]     Content-Type: application/json
	I0109 00:25:39.717870 1747564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-979047-m02","uid":"cb9ac37c-021d-49d9-bd4f-947cba9ffead","resourceVersion":"511","creationTimestamp":"2024-01-09T00:25:32Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-979047-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-979047","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_09T00_25_33_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:25:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5844 chars]
	I0109 00:25:40.215065 1747564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-979047-m02
	I0109 00:25:40.215089 1747564 round_trippers.go:469] Request Headers:
	I0109 00:25:40.215099 1747564 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:25:40.215106 1747564 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0109 00:25:40.217579 1747564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0109 00:25:40.217607 1747564 round_trippers.go:577] Response Headers:
	I0109 00:25:40.217617 1747564 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:25:40 GMT
	I0109 00:25:40.217623 1747564 round_trippers.go:580]     Audit-Id: b91ebd6c-51f9-44a8-955b-9850f203b7b6
	I0109 00:25:40.217630 1747564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:25:40.217636 1747564 round_trippers.go:580]     Content-Type: application/json
	I0109 00:25:40.217642 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab5afe39-1a2e-4c0a-808e-6147fe73e525
	I0109 00:25:40.217653 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 95fdf47b-223c-4eb6-a41b-30b44ade375a
	I0109 00:25:40.217766 1747564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-979047-m02","uid":"cb9ac37c-021d-49d9-bd4f-947cba9ffead","resourceVersion":"511","creationTimestamp":"2024-01-09T00:25:32Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-979047-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-979047","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_09T00_25_33_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:25:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5844 chars]
	I0109 00:25:40.218173 1747564 node_ready.go:58] node "multinode-979047-m02" has status "Ready":"False"
	I0109 00:25:40.714898 1747564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-979047-m02
	I0109 00:25:40.714923 1747564 round_trippers.go:469] Request Headers:
	I0109 00:25:40.714933 1747564 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0109 00:25:40.714941 1747564 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:25:40.718485 1747564 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0109 00:25:40.718514 1747564 round_trippers.go:577] Response Headers:
	I0109 00:25:40.718525 1747564 round_trippers.go:580]     Audit-Id: 1d50ba17-f5c5-4a22-94b7-e2326e195664
	I0109 00:25:40.718531 1747564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:25:40.718538 1747564 round_trippers.go:580]     Content-Type: application/json
	I0109 00:25:40.718544 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab5afe39-1a2e-4c0a-808e-6147fe73e525
	I0109 00:25:40.718550 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 95fdf47b-223c-4eb6-a41b-30b44ade375a
	I0109 00:25:40.718556 1747564 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:25:40 GMT
	I0109 00:25:40.718881 1747564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-979047-m02","uid":"cb9ac37c-021d-49d9-bd4f-947cba9ffead","resourceVersion":"511","creationTimestamp":"2024-01-09T00:25:32Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-979047-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-979047","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_09T00_25_33_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:25:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5844 chars]
	I0109 00:25:41.214525 1747564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-979047-m02
	I0109 00:25:41.214552 1747564 round_trippers.go:469] Request Headers:
	I0109 00:25:41.214562 1747564 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:25:41.214569 1747564 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0109 00:25:41.217011 1747564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0109 00:25:41.217035 1747564 round_trippers.go:577] Response Headers:
	I0109 00:25:41.217043 1747564 round_trippers.go:580]     Content-Type: application/json
	I0109 00:25:41.217050 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab5afe39-1a2e-4c0a-808e-6147fe73e525
	I0109 00:25:41.217056 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 95fdf47b-223c-4eb6-a41b-30b44ade375a
	I0109 00:25:41.217063 1747564 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:25:41 GMT
	I0109 00:25:41.217069 1747564 round_trippers.go:580]     Audit-Id: 2e901ef5-d31a-47fe-9bb6-07eee9f8187b
	I0109 00:25:41.217075 1747564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:25:41.217196 1747564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-979047-m02","uid":"cb9ac37c-021d-49d9-bd4f-947cba9ffead","resourceVersion":"511","creationTimestamp":"2024-01-09T00:25:32Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-979047-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-979047","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_09T00_25_33_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:25:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5844 chars]
	I0109 00:25:41.714094 1747564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-979047-m02
	I0109 00:25:41.714119 1747564 round_trippers.go:469] Request Headers:
	I0109 00:25:41.714128 1747564 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:25:41.714136 1747564 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0109 00:25:41.716765 1747564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0109 00:25:41.716794 1747564 round_trippers.go:577] Response Headers:
	I0109 00:25:41.716803 1747564 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:25:41 GMT
	I0109 00:25:41.716809 1747564 round_trippers.go:580]     Audit-Id: ff99aad4-26b9-4d23-adf7-a937f8433747
	I0109 00:25:41.716816 1747564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:25:41.716825 1747564 round_trippers.go:580]     Content-Type: application/json
	I0109 00:25:41.716831 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab5afe39-1a2e-4c0a-808e-6147fe73e525
	I0109 00:25:41.716840 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 95fdf47b-223c-4eb6-a41b-30b44ade375a
	I0109 00:25:41.717193 1747564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-979047-m02","uid":"cb9ac37c-021d-49d9-bd4f-947cba9ffead","resourceVersion":"511","creationTimestamp":"2024-01-09T00:25:32Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-979047-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-979047","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_09T00_25_33_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:25:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5844 chars]
	I0109 00:25:42.214929 1747564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-979047-m02
	I0109 00:25:42.214959 1747564 round_trippers.go:469] Request Headers:
	I0109 00:25:42.214969 1747564 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:25:42.214976 1747564 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0109 00:25:42.217828 1747564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0109 00:25:42.217856 1747564 round_trippers.go:577] Response Headers:
	I0109 00:25:42.217865 1747564 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:25:42 GMT
	I0109 00:25:42.217872 1747564 round_trippers.go:580]     Audit-Id: 8cecfc87-e9e7-4fce-a69f-f21bcbf47f74
	I0109 00:25:42.217879 1747564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:25:42.217885 1747564 round_trippers.go:580]     Content-Type: application/json
	I0109 00:25:42.217891 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab5afe39-1a2e-4c0a-808e-6147fe73e525
	I0109 00:25:42.217897 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 95fdf47b-223c-4eb6-a41b-30b44ade375a
	I0109 00:25:42.218142 1747564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-979047-m02","uid":"cb9ac37c-021d-49d9-bd4f-947cba9ffead","resourceVersion":"511","creationTimestamp":"2024-01-09T00:25:32Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-979047-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-979047","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_09T00_25_33_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:25:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5844 chars]
	I0109 00:25:42.218596 1747564 node_ready.go:58] node "multinode-979047-m02" has status "Ready":"False"
	I0109 00:25:42.714428 1747564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-979047-m02
	I0109 00:25:42.714472 1747564 round_trippers.go:469] Request Headers:
	I0109 00:25:42.714482 1747564 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:25:42.714489 1747564 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0109 00:25:42.716710 1747564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0109 00:25:42.716732 1747564 round_trippers.go:577] Response Headers:
	I0109 00:25:42.716742 1747564 round_trippers.go:580]     Audit-Id: 3a1fac44-26fd-4c21-a8c6-0580d2f0bbe6
	I0109 00:25:42.716748 1747564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:25:42.716755 1747564 round_trippers.go:580]     Content-Type: application/json
	I0109 00:25:42.716771 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab5afe39-1a2e-4c0a-808e-6147fe73e525
	I0109 00:25:42.716783 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 95fdf47b-223c-4eb6-a41b-30b44ade375a
	I0109 00:25:42.716789 1747564 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:25:42 GMT
	I0109 00:25:42.717167 1747564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-979047-m02","uid":"cb9ac37c-021d-49d9-bd4f-947cba9ffead","resourceVersion":"516","creationTimestamp":"2024-01-09T00:25:32Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-979047-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-979047","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_09T00_25_33_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:25:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0109 00:25:43.214093 1747564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-979047-m02
	I0109 00:25:43.214115 1747564 round_trippers.go:469] Request Headers:
	I0109 00:25:43.214124 1747564 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:25:43.214132 1747564 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0109 00:25:43.216786 1747564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0109 00:25:43.216845 1747564 round_trippers.go:577] Response Headers:
	I0109 00:25:43.216867 1747564 round_trippers.go:580]     Content-Type: application/json
	I0109 00:25:43.216890 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab5afe39-1a2e-4c0a-808e-6147fe73e525
	I0109 00:25:43.216913 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 95fdf47b-223c-4eb6-a41b-30b44ade375a
	I0109 00:25:43.216935 1747564 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:25:43 GMT
	I0109 00:25:43.216960 1747564 round_trippers.go:580]     Audit-Id: eb43dac1-b27b-4a1e-a7e2-327b0c00b7fb
	I0109 00:25:43.216981 1747564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:25:43.217131 1747564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-979047-m02","uid":"cb9ac37c-021d-49d9-bd4f-947cba9ffead","resourceVersion":"516","creationTimestamp":"2024-01-09T00:25:32Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-979047-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-979047","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_09T00_25_33_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:25:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0109 00:25:43.714689 1747564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-979047-m02
	I0109 00:25:43.714715 1747564 round_trippers.go:469] Request Headers:
	I0109 00:25:43.714725 1747564 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:25:43.714733 1747564 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0109 00:25:43.717394 1747564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0109 00:25:43.717421 1747564 round_trippers.go:577] Response Headers:
	I0109 00:25:43.717429 1747564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:25:43.717436 1747564 round_trippers.go:580]     Content-Type: application/json
	I0109 00:25:43.717443 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab5afe39-1a2e-4c0a-808e-6147fe73e525
	I0109 00:25:43.717449 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 95fdf47b-223c-4eb6-a41b-30b44ade375a
	I0109 00:25:43.717457 1747564 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:25:43 GMT
	I0109 00:25:43.717463 1747564 round_trippers.go:580]     Audit-Id: ea799709-8997-4eef-930c-aadbbfd16084
	I0109 00:25:43.717579 1747564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-979047-m02","uid":"cb9ac37c-021d-49d9-bd4f-947cba9ffead","resourceVersion":"516","creationTimestamp":"2024-01-09T00:25:32Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-979047-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-979047","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_09T00_25_33_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:25:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0109 00:25:44.214798 1747564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-979047-m02
	I0109 00:25:44.214884 1747564 round_trippers.go:469] Request Headers:
	I0109 00:25:44.214901 1747564 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:25:44.214909 1747564 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0109 00:25:44.217562 1747564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0109 00:25:44.217586 1747564 round_trippers.go:577] Response Headers:
	I0109 00:25:44.217595 1747564 round_trippers.go:580]     Audit-Id: f99f2e31-a32e-4a9a-be85-59ac8973a045
	I0109 00:25:44.217602 1747564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:25:44.217608 1747564 round_trippers.go:580]     Content-Type: application/json
	I0109 00:25:44.217614 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab5afe39-1a2e-4c0a-808e-6147fe73e525
	I0109 00:25:44.217621 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 95fdf47b-223c-4eb6-a41b-30b44ade375a
	I0109 00:25:44.217628 1747564 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:25:44 GMT
	I0109 00:25:44.217958 1747564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-979047-m02","uid":"cb9ac37c-021d-49d9-bd4f-947cba9ffead","resourceVersion":"516","creationTimestamp":"2024-01-09T00:25:32Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-979047-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-979047","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_09T00_25_33_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:25:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0109 00:25:44.714607 1747564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-979047-m02
	I0109 00:25:44.714632 1747564 round_trippers.go:469] Request Headers:
	I0109 00:25:44.714642 1747564 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0109 00:25:44.714650 1747564 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:25:44.717156 1747564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0109 00:25:44.717180 1747564 round_trippers.go:577] Response Headers:
	I0109 00:25:44.717189 1747564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:25:44.717196 1747564 round_trippers.go:580]     Content-Type: application/json
	I0109 00:25:44.717202 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab5afe39-1a2e-4c0a-808e-6147fe73e525
	I0109 00:25:44.717208 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 95fdf47b-223c-4eb6-a41b-30b44ade375a
	I0109 00:25:44.717214 1747564 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:25:44 GMT
	I0109 00:25:44.717221 1747564 round_trippers.go:580]     Audit-Id: 526b815f-c521-4c0b-8c6f-b44a3ff5492f
	I0109 00:25:44.717364 1747564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-979047-m02","uid":"cb9ac37c-021d-49d9-bd4f-947cba9ffead","resourceVersion":"516","creationTimestamp":"2024-01-09T00:25:32Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-979047-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-979047","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_09T00_25_33_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:25:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0109 00:25:44.717778 1747564 node_ready.go:58] node "multinode-979047-m02" has status "Ready":"False"
	I0109 00:25:45.214112 1747564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-979047-m02
	I0109 00:25:45.214142 1747564 round_trippers.go:469] Request Headers:
	I0109 00:25:45.214153 1747564 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:25:45.214161 1747564 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0109 00:25:45.216856 1747564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0109 00:25:45.216882 1747564 round_trippers.go:577] Response Headers:
	I0109 00:25:45.216892 1747564 round_trippers.go:580]     Audit-Id: 782c6117-ef77-426f-8174-54755ac6b4f8
	I0109 00:25:45.216899 1747564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:25:45.216905 1747564 round_trippers.go:580]     Content-Type: application/json
	I0109 00:25:45.216912 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab5afe39-1a2e-4c0a-808e-6147fe73e525
	I0109 00:25:45.216919 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 95fdf47b-223c-4eb6-a41b-30b44ade375a
	I0109 00:25:45.216925 1747564 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:25:45 GMT
	I0109 00:25:45.217293 1747564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-979047-m02","uid":"cb9ac37c-021d-49d9-bd4f-947cba9ffead","resourceVersion":"516","creationTimestamp":"2024-01-09T00:25:32Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-979047-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-979047","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_09T00_25_33_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:25:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0109 00:25:45.715005 1747564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-979047-m02
	I0109 00:25:45.715032 1747564 round_trippers.go:469] Request Headers:
	I0109 00:25:45.715043 1747564 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:25:45.715050 1747564 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0109 00:25:45.717429 1747564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0109 00:25:45.717454 1747564 round_trippers.go:577] Response Headers:
	I0109 00:25:45.717462 1747564 round_trippers.go:580]     Audit-Id: bd45e393-dab4-4b47-be23-078f5df899aa
	I0109 00:25:45.717469 1747564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:25:45.717475 1747564 round_trippers.go:580]     Content-Type: application/json
	I0109 00:25:45.717481 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab5afe39-1a2e-4c0a-808e-6147fe73e525
	I0109 00:25:45.717495 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 95fdf47b-223c-4eb6-a41b-30b44ade375a
	I0109 00:25:45.717501 1747564 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:25:45 GMT
	I0109 00:25:45.717784 1747564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-979047-m02","uid":"cb9ac37c-021d-49d9-bd4f-947cba9ffead","resourceVersion":"516","creationTimestamp":"2024-01-09T00:25:32Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-979047-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-979047","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_09T00_25_33_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:25:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0109 00:25:46.214781 1747564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-979047-m02
	I0109 00:25:46.214806 1747564 round_trippers.go:469] Request Headers:
	I0109 00:25:46.214815 1747564 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:25:46.214822 1747564 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0109 00:25:46.217352 1747564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0109 00:25:46.217376 1747564 round_trippers.go:577] Response Headers:
	I0109 00:25:46.217385 1747564 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:25:46 GMT
	I0109 00:25:46.217391 1747564 round_trippers.go:580]     Audit-Id: 2b2c9b8a-ec54-40d6-a450-3eb62ac35736
	I0109 00:25:46.217397 1747564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:25:46.217404 1747564 round_trippers.go:580]     Content-Type: application/json
	I0109 00:25:46.217410 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab5afe39-1a2e-4c0a-808e-6147fe73e525
	I0109 00:25:46.217416 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 95fdf47b-223c-4eb6-a41b-30b44ade375a
	I0109 00:25:46.217731 1747564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-979047-m02","uid":"cb9ac37c-021d-49d9-bd4f-947cba9ffead","resourceVersion":"516","creationTimestamp":"2024-01-09T00:25:32Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-979047-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-979047","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_09T00_25_33_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:25:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0109 00:25:46.714996 1747564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-979047-m02
	I0109 00:25:46.715021 1747564 round_trippers.go:469] Request Headers:
	I0109 00:25:46.715031 1747564 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:25:46.715038 1747564 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0109 00:25:46.717892 1747564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0109 00:25:46.717918 1747564 round_trippers.go:577] Response Headers:
	I0109 00:25:46.717926 1747564 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:25:46 GMT
	I0109 00:25:46.717933 1747564 round_trippers.go:580]     Audit-Id: f71cb608-2ce0-48c9-b8b8-12960eab1a19
	I0109 00:25:46.717940 1747564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:25:46.717949 1747564 round_trippers.go:580]     Content-Type: application/json
	I0109 00:25:46.717956 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab5afe39-1a2e-4c0a-808e-6147fe73e525
	I0109 00:25:46.717963 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 95fdf47b-223c-4eb6-a41b-30b44ade375a
	I0109 00:25:46.718101 1747564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-979047-m02","uid":"cb9ac37c-021d-49d9-bd4f-947cba9ffead","resourceVersion":"516","creationTimestamp":"2024-01-09T00:25:32Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-979047-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-979047","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_09T00_25_33_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:25:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0109 00:25:46.718541 1747564 node_ready.go:58] node "multinode-979047-m02" has status "Ready":"False"
	I0109 00:25:47.214159 1747564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-979047-m02
	I0109 00:25:47.214181 1747564 round_trippers.go:469] Request Headers:
	I0109 00:25:47.214191 1747564 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:25:47.214201 1747564 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0109 00:25:47.216673 1747564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0109 00:25:47.216694 1747564 round_trippers.go:577] Response Headers:
	I0109 00:25:47.216703 1747564 round_trippers.go:580]     Content-Type: application/json
	I0109 00:25:47.216710 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab5afe39-1a2e-4c0a-808e-6147fe73e525
	I0109 00:25:47.216716 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 95fdf47b-223c-4eb6-a41b-30b44ade375a
	I0109 00:25:47.216723 1747564 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:25:47 GMT
	I0109 00:25:47.216729 1747564 round_trippers.go:580]     Audit-Id: 325e9255-2a07-4c58-a8c8-32126dce32d8
	I0109 00:25:47.216735 1747564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:25:47.216856 1747564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-979047-m02","uid":"cb9ac37c-021d-49d9-bd4f-947cba9ffead","resourceVersion":"516","creationTimestamp":"2024-01-09T00:25:32Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-979047-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-979047","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_09T00_25_33_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:25:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0109 00:25:47.713990 1747564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-979047-m02
	I0109 00:25:47.714016 1747564 round_trippers.go:469] Request Headers:
	I0109 00:25:47.714026 1747564 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:25:47.714034 1747564 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0109 00:25:47.716585 1747564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0109 00:25:47.716606 1747564 round_trippers.go:577] Response Headers:
	I0109 00:25:47.716614 1747564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:25:47.716620 1747564 round_trippers.go:580]     Content-Type: application/json
	I0109 00:25:47.716636 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab5afe39-1a2e-4c0a-808e-6147fe73e525
	I0109 00:25:47.716643 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 95fdf47b-223c-4eb6-a41b-30b44ade375a
	I0109 00:25:47.716649 1747564 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:25:47 GMT
	I0109 00:25:47.716656 1747564 round_trippers.go:580]     Audit-Id: 53747d0b-39bf-47b5-85e7-eec486b66564
	I0109 00:25:47.716871 1747564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-979047-m02","uid":"cb9ac37c-021d-49d9-bd4f-947cba9ffead","resourceVersion":"516","creationTimestamp":"2024-01-09T00:25:32Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-979047-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-979047","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_09T00_25_33_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:25:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0109 00:25:48.214589 1747564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-979047-m02
	I0109 00:25:48.214614 1747564 round_trippers.go:469] Request Headers:
	I0109 00:25:48.214624 1747564 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:25:48.214631 1747564 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0109 00:25:48.217149 1747564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0109 00:25:48.217176 1747564 round_trippers.go:577] Response Headers:
	I0109 00:25:48.217185 1747564 round_trippers.go:580]     Content-Type: application/json
	I0109 00:25:48.217192 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab5afe39-1a2e-4c0a-808e-6147fe73e525
	I0109 00:25:48.217198 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 95fdf47b-223c-4eb6-a41b-30b44ade375a
	I0109 00:25:48.217204 1747564 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:25:48 GMT
	I0109 00:25:48.217210 1747564 round_trippers.go:580]     Audit-Id: 23d2f047-a649-4e1d-9f43-8bd7952c06a2
	I0109 00:25:48.217216 1747564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:25:48.217350 1747564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-979047-m02","uid":"cb9ac37c-021d-49d9-bd4f-947cba9ffead","resourceVersion":"516","creationTimestamp":"2024-01-09T00:25:32Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-979047-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-979047","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_09T00_25_33_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:25:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0109 00:25:48.714526 1747564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-979047-m02
	I0109 00:25:48.714553 1747564 round_trippers.go:469] Request Headers:
	I0109 00:25:48.714563 1747564 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0109 00:25:48.714581 1747564 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:25:48.717102 1747564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0109 00:25:48.717128 1747564 round_trippers.go:577] Response Headers:
	I0109 00:25:48.717136 1747564 round_trippers.go:580]     Content-Type: application/json
	I0109 00:25:48.717143 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab5afe39-1a2e-4c0a-808e-6147fe73e525
	I0109 00:25:48.717150 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 95fdf47b-223c-4eb6-a41b-30b44ade375a
	I0109 00:25:48.717156 1747564 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:25:48 GMT
	I0109 00:25:48.717162 1747564 round_trippers.go:580]     Audit-Id: 57d2bbf9-db14-48bd-aec9-1fcfddb86106
	I0109 00:25:48.717169 1747564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:25:48.717297 1747564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-979047-m02","uid":"cb9ac37c-021d-49d9-bd4f-947cba9ffead","resourceVersion":"516","creationTimestamp":"2024-01-09T00:25:32Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-979047-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-979047","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_09T00_25_33_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:25:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0109 00:25:49.214081 1747564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-979047-m02
	I0109 00:25:49.214107 1747564 round_trippers.go:469] Request Headers:
	I0109 00:25:49.214117 1747564 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:25:49.214125 1747564 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0109 00:25:49.216634 1747564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0109 00:25:49.216654 1747564 round_trippers.go:577] Response Headers:
	I0109 00:25:49.216662 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 95fdf47b-223c-4eb6-a41b-30b44ade375a
	I0109 00:25:49.216669 1747564 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:25:49 GMT
	I0109 00:25:49.216675 1747564 round_trippers.go:580]     Audit-Id: beaea4f7-d371-41ab-95fb-45d4abe87d9c
	I0109 00:25:49.216681 1747564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:25:49.216687 1747564 round_trippers.go:580]     Content-Type: application/json
	I0109 00:25:49.216694 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab5afe39-1a2e-4c0a-808e-6147fe73e525
	I0109 00:25:49.216940 1747564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-979047-m02","uid":"cb9ac37c-021d-49d9-bd4f-947cba9ffead","resourceVersion":"516","creationTimestamp":"2024-01-09T00:25:32Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-979047-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-979047","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_09T00_25_33_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:25:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0109 00:25:49.217352 1747564 node_ready.go:58] node "multinode-979047-m02" has status "Ready":"False"
	I0109 00:25:49.714023 1747564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-979047-m02
	I0109 00:25:49.714047 1747564 round_trippers.go:469] Request Headers:
	I0109 00:25:49.714057 1747564 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0109 00:25:49.714064 1747564 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:25:49.716839 1747564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0109 00:25:49.716863 1747564 round_trippers.go:577] Response Headers:
	I0109 00:25:49.716872 1747564 round_trippers.go:580]     Audit-Id: f2ebe031-335e-439c-8ba2-263b20349b34
	I0109 00:25:49.716879 1747564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:25:49.716886 1747564 round_trippers.go:580]     Content-Type: application/json
	I0109 00:25:49.716892 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab5afe39-1a2e-4c0a-808e-6147fe73e525
	I0109 00:25:49.716899 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 95fdf47b-223c-4eb6-a41b-30b44ade375a
	I0109 00:25:49.716905 1747564 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:25:49 GMT
	I0109 00:25:49.717094 1747564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-979047-m02","uid":"cb9ac37c-021d-49d9-bd4f-947cba9ffead","resourceVersion":"516","creationTimestamp":"2024-01-09T00:25:32Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-979047-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-979047","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_09T00_25_33_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:25:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0109 00:25:50.214705 1747564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-979047-m02
	I0109 00:25:50.214730 1747564 round_trippers.go:469] Request Headers:
	I0109 00:25:50.214739 1747564 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:25:50.214747 1747564 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0109 00:25:50.217201 1747564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0109 00:25:50.217222 1747564 round_trippers.go:577] Response Headers:
	I0109 00:25:50.217230 1747564 round_trippers.go:580]     Audit-Id: 4063e680-2ca0-4542-97d6-166b2f090fff
	I0109 00:25:50.217236 1747564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:25:50.217242 1747564 round_trippers.go:580]     Content-Type: application/json
	I0109 00:25:50.217249 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab5afe39-1a2e-4c0a-808e-6147fe73e525
	I0109 00:25:50.217255 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 95fdf47b-223c-4eb6-a41b-30b44ade375a
	I0109 00:25:50.217262 1747564 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:25:50 GMT
	I0109 00:25:50.217371 1747564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-979047-m02","uid":"cb9ac37c-021d-49d9-bd4f-947cba9ffead","resourceVersion":"516","creationTimestamp":"2024-01-09T00:25:32Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-979047-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-979047","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_09T00_25_33_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:25:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0109 00:25:50.714498 1747564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-979047-m02
	I0109 00:25:50.714521 1747564 round_trippers.go:469] Request Headers:
	I0109 00:25:50.714532 1747564 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:25:50.714540 1747564 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0109 00:25:50.717219 1747564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0109 00:25:50.717243 1747564 round_trippers.go:577] Response Headers:
	I0109 00:25:50.717253 1747564 round_trippers.go:580]     Audit-Id: 9167e93a-9a7d-4684-9d4d-5e0d896f4c71
	I0109 00:25:50.717259 1747564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:25:50.717265 1747564 round_trippers.go:580]     Content-Type: application/json
	I0109 00:25:50.717271 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab5afe39-1a2e-4c0a-808e-6147fe73e525
	I0109 00:25:50.717278 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 95fdf47b-223c-4eb6-a41b-30b44ade375a
	I0109 00:25:50.717284 1747564 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:25:50 GMT
	I0109 00:25:50.717426 1747564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-979047-m02","uid":"cb9ac37c-021d-49d9-bd4f-947cba9ffead","resourceVersion":"516","creationTimestamp":"2024-01-09T00:25:32Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-979047-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-979047","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_09T00_25_33_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:25:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0109 00:25:51.214503 1747564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-979047-m02
	I0109 00:25:51.214531 1747564 round_trippers.go:469] Request Headers:
	I0109 00:25:51.214540 1747564 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:25:51.214547 1747564 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0109 00:25:51.217105 1747564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0109 00:25:51.217129 1747564 round_trippers.go:577] Response Headers:
	I0109 00:25:51.217138 1747564 round_trippers.go:580]     Content-Type: application/json
	I0109 00:25:51.217151 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab5afe39-1a2e-4c0a-808e-6147fe73e525
	I0109 00:25:51.217159 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 95fdf47b-223c-4eb6-a41b-30b44ade375a
	I0109 00:25:51.217166 1747564 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:25:51 GMT
	I0109 00:25:51.217173 1747564 round_trippers.go:580]     Audit-Id: 8b2ff8d6-de11-43f2-98f9-43d56378c7f1
	I0109 00:25:51.217179 1747564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:25:51.217294 1747564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-979047-m02","uid":"cb9ac37c-021d-49d9-bd4f-947cba9ffead","resourceVersion":"516","creationTimestamp":"2024-01-09T00:25:32Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-979047-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-979047","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_09T00_25_33_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:25:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0109 00:25:51.217704 1747564 node_ready.go:58] node "multinode-979047-m02" has status "Ready":"False"
	I0109 00:25:51.714922 1747564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-979047-m02
	I0109 00:25:51.714945 1747564 round_trippers.go:469] Request Headers:
	I0109 00:25:51.714955 1747564 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:25:51.714962 1747564 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0109 00:25:51.717435 1747564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0109 00:25:51.717456 1747564 round_trippers.go:577] Response Headers:
	I0109 00:25:51.717464 1747564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:25:51.717470 1747564 round_trippers.go:580]     Content-Type: application/json
	I0109 00:25:51.717476 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab5afe39-1a2e-4c0a-808e-6147fe73e525
	I0109 00:25:51.717483 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 95fdf47b-223c-4eb6-a41b-30b44ade375a
	I0109 00:25:51.717493 1747564 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:25:51 GMT
	I0109 00:25:51.717499 1747564 round_trippers.go:580]     Audit-Id: a66c3ec6-97a8-4c4f-bd18-077fa4079227
	I0109 00:25:51.717639 1747564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-979047-m02","uid":"cb9ac37c-021d-49d9-bd4f-947cba9ffead","resourceVersion":"516","creationTimestamp":"2024-01-09T00:25:32Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-979047-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-979047","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_09T00_25_33_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:25:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0109 00:25:52.214764 1747564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-979047-m02
	I0109 00:25:52.214784 1747564 round_trippers.go:469] Request Headers:
	I0109 00:25:52.214794 1747564 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:25:52.214801 1747564 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0109 00:25:52.217247 1747564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0109 00:25:52.217282 1747564 round_trippers.go:577] Response Headers:
	I0109 00:25:52.217291 1747564 round_trippers.go:580]     Content-Type: application/json
	I0109 00:25:52.217297 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab5afe39-1a2e-4c0a-808e-6147fe73e525
	I0109 00:25:52.217303 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 95fdf47b-223c-4eb6-a41b-30b44ade375a
	I0109 00:25:52.217309 1747564 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:25:52 GMT
	I0109 00:25:52.217316 1747564 round_trippers.go:580]     Audit-Id: 5fc84951-096a-4f70-b1a6-82cf62cdb80b
	I0109 00:25:52.217326 1747564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:25:52.217469 1747564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-979047-m02","uid":"cb9ac37c-021d-49d9-bd4f-947cba9ffead","resourceVersion":"516","creationTimestamp":"2024-01-09T00:25:32Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-979047-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-979047","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_09T00_25_33_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:25:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0109 00:25:52.714609 1747564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-979047-m02
	I0109 00:25:52.714639 1747564 round_trippers.go:469] Request Headers:
	I0109 00:25:52.714650 1747564 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0109 00:25:52.714657 1747564 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:25:52.716982 1747564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0109 00:25:52.717011 1747564 round_trippers.go:577] Response Headers:
	I0109 00:25:52.717019 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 95fdf47b-223c-4eb6-a41b-30b44ade375a
	I0109 00:25:52.717027 1747564 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:25:52 GMT
	I0109 00:25:52.717033 1747564 round_trippers.go:580]     Audit-Id: 9f03f799-b459-44d3-bdeb-96a499899bc6
	I0109 00:25:52.717046 1747564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:25:52.717053 1747564 round_trippers.go:580]     Content-Type: application/json
	I0109 00:25:52.717059 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab5afe39-1a2e-4c0a-808e-6147fe73e525
	I0109 00:25:52.717274 1747564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-979047-m02","uid":"cb9ac37c-021d-49d9-bd4f-947cba9ffead","resourceVersion":"516","creationTimestamp":"2024-01-09T00:25:32Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-979047-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-979047","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_09T00_25_33_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:25:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0109 00:25:53.214980 1747564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-979047-m02
	I0109 00:25:53.215006 1747564 round_trippers.go:469] Request Headers:
	I0109 00:25:53.215020 1747564 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:25:53.215027 1747564 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0109 00:25:53.217442 1747564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0109 00:25:53.217501 1747564 round_trippers.go:577] Response Headers:
	I0109 00:25:53.217523 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab5afe39-1a2e-4c0a-808e-6147fe73e525
	I0109 00:25:53.217545 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 95fdf47b-223c-4eb6-a41b-30b44ade375a
	I0109 00:25:53.217583 1747564 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:25:53 GMT
	I0109 00:25:53.217610 1747564 round_trippers.go:580]     Audit-Id: c9259918-5589-471f-a8bf-499a93eaceb2
	I0109 00:25:53.217624 1747564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:25:53.217631 1747564 round_trippers.go:580]     Content-Type: application/json
	I0109 00:25:53.217780 1747564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-979047-m02","uid":"cb9ac37c-021d-49d9-bd4f-947cba9ffead","resourceVersion":"516","creationTimestamp":"2024-01-09T00:25:32Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-979047-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-979047","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_09T00_25_33_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:25:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0109 00:25:53.218204 1747564 node_ready.go:58] node "multinode-979047-m02" has status "Ready":"False"
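
Note that resourceVersion holds at "516" across every poll in this stretch: the node object is not changing while it remains NotReady, so each GET returns an identical body. Below is a self-contained sketch of pulling the Ready condition out of one of these raw bodies, using only encoding/json and a deliberately reduced nodeDoc struct (an illustration; the real object is corev1.Node, and the condition value here is taken from the node_ready.go lines above).

package main

import (
	"encoding/json"
	"fmt"
)

// nodeDoc models only the fields this sketch needs from the logged body.
type nodeDoc struct {
	Metadata struct {
		Name            string `json:"name"`
		ResourceVersion string `json:"resourceVersion"`
	} `json:"metadata"`
	Status struct {
		Conditions []struct {
			Type   string `json:"type"`
			Status string `json:"status"`
		} `json:"conditions"`
	} `json:"status"`
}

func main() {
	// Reduced stand-in for the full response body logged above.
	raw := []byte(`{"metadata":{"name":"multinode-979047-m02","resourceVersion":"516"},
	                "status":{"conditions":[{"type":"Ready","status":"False"}]}}`)
	var n nodeDoc
	if err := json.Unmarshal(raw, &n); err != nil {
		panic(err)
	}
	for _, c := range n.Status.Conditions {
		if c.Type == "Ready" {
			fmt.Printf("node %s (rv %s) Ready=%s\n", n.Metadata.Name, n.Metadata.ResourceVersion, c.Status)
		}
	}
}
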
	I0109 00:25:53.714073 1747564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-979047-m02
	I0109 00:25:53.714097 1747564 round_trippers.go:469] Request Headers:
	I0109 00:25:53.714107 1747564 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:25:53.714114 1747564 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0109 00:25:53.716765 1747564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0109 00:25:53.716788 1747564 round_trippers.go:577] Response Headers:
	I0109 00:25:53.716797 1747564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:25:53.716804 1747564 round_trippers.go:580]     Content-Type: application/json
	I0109 00:25:53.716811 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab5afe39-1a2e-4c0a-808e-6147fe73e525
	I0109 00:25:53.716820 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 95fdf47b-223c-4eb6-a41b-30b44ade375a
	I0109 00:25:53.716827 1747564 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:25:53 GMT
	I0109 00:25:53.716833 1747564 round_trippers.go:580]     Audit-Id: f43ccd57-2999-4bf4-93b6-bec5b071023e
	I0109 00:25:53.717226 1747564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-979047-m02","uid":"cb9ac37c-021d-49d9-bd4f-947cba9ffead","resourceVersion":"516","creationTimestamp":"2024-01-09T00:25:32Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-979047-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-979047","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_09T00_25_33_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:25:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0109 00:25:54.214912 1747564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-979047-m02
	I0109 00:25:54.214937 1747564 round_trippers.go:469] Request Headers:
	I0109 00:25:54.214947 1747564 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:25:54.214954 1747564 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0109 00:25:54.217500 1747564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0109 00:25:54.217521 1747564 round_trippers.go:577] Response Headers:
	I0109 00:25:54.217529 1747564 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:25:54 GMT
	I0109 00:25:54.217536 1747564 round_trippers.go:580]     Audit-Id: f094851c-b09e-44b8-b781-9ddf9b0c835f
	I0109 00:25:54.217542 1747564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:25:54.217548 1747564 round_trippers.go:580]     Content-Type: application/json
	I0109 00:25:54.217554 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab5afe39-1a2e-4c0a-808e-6147fe73e525
	I0109 00:25:54.217561 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 95fdf47b-223c-4eb6-a41b-30b44ade375a
	I0109 00:25:54.217701 1747564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-979047-m02","uid":"cb9ac37c-021d-49d9-bd4f-947cba9ffead","resourceVersion":"516","creationTimestamp":"2024-01-09T00:25:32Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-979047-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-979047","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_09T00_25_33_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:25:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0109 00:25:54.714090 1747564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-979047-m02
	I0109 00:25:54.714113 1747564 round_trippers.go:469] Request Headers:
	I0109 00:25:54.714123 1747564 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:25:54.714130 1747564 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0109 00:25:54.716727 1747564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0109 00:25:54.716754 1747564 round_trippers.go:577] Response Headers:
	I0109 00:25:54.716762 1747564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:25:54.716769 1747564 round_trippers.go:580]     Content-Type: application/json
	I0109 00:25:54.716783 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab5afe39-1a2e-4c0a-808e-6147fe73e525
	I0109 00:25:54.716790 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 95fdf47b-223c-4eb6-a41b-30b44ade375a
	I0109 00:25:54.716796 1747564 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:25:54 GMT
	I0109 00:25:54.716802 1747564 round_trippers.go:580]     Audit-Id: 682683c2-d085-48da-892e-66d984a0f416
	I0109 00:25:54.716938 1747564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-979047-m02","uid":"cb9ac37c-021d-49d9-bd4f-947cba9ffead","resourceVersion":"516","creationTimestamp":"2024-01-09T00:25:32Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-979047-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-979047","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_09T00_25_33_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:25:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0109 00:25:55.213977 1747564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-979047-m02
	I0109 00:25:55.213999 1747564 round_trippers.go:469] Request Headers:
	I0109 00:25:55.214009 1747564 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:25:55.214016 1747564 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0109 00:25:55.216550 1747564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0109 00:25:55.216574 1747564 round_trippers.go:577] Response Headers:
	I0109 00:25:55.216583 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab5afe39-1a2e-4c0a-808e-6147fe73e525
	I0109 00:25:55.216590 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 95fdf47b-223c-4eb6-a41b-30b44ade375a
	I0109 00:25:55.216596 1747564 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:25:55 GMT
	I0109 00:25:55.216603 1747564 round_trippers.go:580]     Audit-Id: a5cef48d-6b8f-4088-917a-cbffaf23996c
	I0109 00:25:55.216609 1747564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:25:55.216616 1747564 round_trippers.go:580]     Content-Type: application/json
	I0109 00:25:55.216753 1747564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-979047-m02","uid":"cb9ac37c-021d-49d9-bd4f-947cba9ffead","resourceVersion":"516","creationTimestamp":"2024-01-09T00:25:32Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-979047-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-979047","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_09T00_25_33_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:25:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0109 00:25:55.714915 1747564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-979047-m02
	I0109 00:25:55.714939 1747564 round_trippers.go:469] Request Headers:
	I0109 00:25:55.714949 1747564 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:25:55.714956 1747564 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0109 00:25:55.717438 1747564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0109 00:25:55.717460 1747564 round_trippers.go:577] Response Headers:
	I0109 00:25:55.717469 1747564 round_trippers.go:580]     Content-Type: application/json
	I0109 00:25:55.717475 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab5afe39-1a2e-4c0a-808e-6147fe73e525
	I0109 00:25:55.717482 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 95fdf47b-223c-4eb6-a41b-30b44ade375a
	I0109 00:25:55.717489 1747564 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:25:55 GMT
	I0109 00:25:55.717495 1747564 round_trippers.go:580]     Audit-Id: f56a836b-55ac-4822-95d6-43a0c3616d06
	I0109 00:25:55.717501 1747564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:25:55.717628 1747564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-979047-m02","uid":"cb9ac37c-021d-49d9-bd4f-947cba9ffead","resourceVersion":"516","creationTimestamp":"2024-01-09T00:25:32Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-979047-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-979047","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_09T00_25_33_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:25:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0109 00:25:55.718059 1747564 node_ready.go:58] node "multinode-979047-m02" has status "Ready":"False"
	I0109 00:25:56.214834 1747564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-979047-m02
	I0109 00:25:56.214861 1747564 round_trippers.go:469] Request Headers:
	I0109 00:25:56.214872 1747564 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:25:56.214879 1747564 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0109 00:25:56.217322 1747564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0109 00:25:56.217342 1747564 round_trippers.go:577] Response Headers:
	I0109 00:25:56.217350 1747564 round_trippers.go:580]     Audit-Id: 7ce17147-10d2-4a72-b512-0aee084575dc
	I0109 00:25:56.217359 1747564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:25:56.217365 1747564 round_trippers.go:580]     Content-Type: application/json
	I0109 00:25:56.217371 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab5afe39-1a2e-4c0a-808e-6147fe73e525
	I0109 00:25:56.217378 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 95fdf47b-223c-4eb6-a41b-30b44ade375a
	I0109 00:25:56.217384 1747564 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:25:56 GMT
	I0109 00:25:56.217519 1747564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-979047-m02","uid":"cb9ac37c-021d-49d9-bd4f-947cba9ffead","resourceVersion":"516","creationTimestamp":"2024-01-09T00:25:32Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-979047-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-979047","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_09T00_25_33_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:25:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0109 00:25:56.714696 1747564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-979047-m02
	I0109 00:25:56.714721 1747564 round_trippers.go:469] Request Headers:
	I0109 00:25:56.714730 1747564 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:25:56.714738 1747564 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0109 00:25:56.717198 1747564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0109 00:25:56.717219 1747564 round_trippers.go:577] Response Headers:
	I0109 00:25:56.717227 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab5afe39-1a2e-4c0a-808e-6147fe73e525
	I0109 00:25:56.717234 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 95fdf47b-223c-4eb6-a41b-30b44ade375a
	I0109 00:25:56.717240 1747564 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:25:56 GMT
	I0109 00:25:56.717247 1747564 round_trippers.go:580]     Audit-Id: 426166d9-9dce-4a86-a57b-442b97bded3a
	I0109 00:25:56.717256 1747564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:25:56.717263 1747564 round_trippers.go:580]     Content-Type: application/json
	I0109 00:25:56.717755 1747564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-979047-m02","uid":"cb9ac37c-021d-49d9-bd4f-947cba9ffead","resourceVersion":"516","creationTimestamp":"2024-01-09T00:25:32Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-979047-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-979047","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_09T00_25_33_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:25:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0109 00:25:57.214707 1747564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-979047-m02
	I0109 00:25:57.214740 1747564 round_trippers.go:469] Request Headers:
	I0109 00:25:57.214750 1747564 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:25:57.214758 1747564 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0109 00:25:57.217306 1747564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0109 00:25:57.217331 1747564 round_trippers.go:577] Response Headers:
	I0109 00:25:57.217340 1747564 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:25:57 GMT
	I0109 00:25:57.217346 1747564 round_trippers.go:580]     Audit-Id: 5a882f06-7077-4a65-bc43-31f657646d83
	I0109 00:25:57.217354 1747564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:25:57.217360 1747564 round_trippers.go:580]     Content-Type: application/json
	I0109 00:25:57.217367 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab5afe39-1a2e-4c0a-808e-6147fe73e525
	I0109 00:25:57.217377 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 95fdf47b-223c-4eb6-a41b-30b44ade375a
	I0109 00:25:57.217495 1747564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-979047-m02","uid":"cb9ac37c-021d-49d9-bd4f-947cba9ffead","resourceVersion":"516","creationTimestamp":"2024-01-09T00:25:32Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-979047-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-979047","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_09T00_25_33_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:25:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0109 00:25:57.714778 1747564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-979047-m02
	I0109 00:25:57.714805 1747564 round_trippers.go:469] Request Headers:
	I0109 00:25:57.714816 1747564 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:25:57.714823 1747564 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0109 00:25:57.717402 1747564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0109 00:25:57.717425 1747564 round_trippers.go:577] Response Headers:
	I0109 00:25:57.717433 1747564 round_trippers.go:580]     Content-Type: application/json
	I0109 00:25:57.717440 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab5afe39-1a2e-4c0a-808e-6147fe73e525
	I0109 00:25:57.717446 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 95fdf47b-223c-4eb6-a41b-30b44ade375a
	I0109 00:25:57.717453 1747564 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:25:57 GMT
	I0109 00:25:57.717459 1747564 round_trippers.go:580]     Audit-Id: ed47b1aa-8ec8-42c1-9c01-779a2469503c
	I0109 00:25:57.717465 1747564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:25:57.717582 1747564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-979047-m02","uid":"cb9ac37c-021d-49d9-bd4f-947cba9ffead","resourceVersion":"516","creationTimestamp":"2024-01-09T00:25:32Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-979047-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-979047","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_09T00_25_33_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:25:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0109 00:25:58.214713 1747564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-979047-m02
	I0109 00:25:58.214738 1747564 round_trippers.go:469] Request Headers:
	I0109 00:25:58.214748 1747564 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:25:58.214755 1747564 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0109 00:25:58.217213 1747564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0109 00:25:58.217236 1747564 round_trippers.go:577] Response Headers:
	I0109 00:25:58.217244 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab5afe39-1a2e-4c0a-808e-6147fe73e525
	I0109 00:25:58.217251 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 95fdf47b-223c-4eb6-a41b-30b44ade375a
	I0109 00:25:58.217258 1747564 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:25:58 GMT
	I0109 00:25:58.217264 1747564 round_trippers.go:580]     Audit-Id: 7a17f80f-50f4-4b7b-861f-737af4e15492
	I0109 00:25:58.217272 1747564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:25:58.217278 1747564 round_trippers.go:580]     Content-Type: application/json
	I0109 00:25:58.217412 1747564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-979047-m02","uid":"cb9ac37c-021d-49d9-bd4f-947cba9ffead","resourceVersion":"516","creationTimestamp":"2024-01-09T00:25:32Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-979047-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-979047","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_09T00_25_33_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:25:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0109 00:25:58.217817 1747564 node_ready.go:58] node "multinode-979047-m02" has status "Ready":"False"
	I0109 00:25:58.714111 1747564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-979047-m02
	I0109 00:25:58.714136 1747564 round_trippers.go:469] Request Headers:
	I0109 00:25:58.714146 1747564 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:25:58.714153 1747564 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0109 00:25:58.716736 1747564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0109 00:25:58.716759 1747564 round_trippers.go:577] Response Headers:
	I0109 00:25:58.716767 1747564 round_trippers.go:580]     Audit-Id: 5c26a12a-47f2-488b-9ac0-af1f44722ee5
	I0109 00:25:58.716773 1747564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:25:58.716783 1747564 round_trippers.go:580]     Content-Type: application/json
	I0109 00:25:58.716791 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab5afe39-1a2e-4c0a-808e-6147fe73e525
	I0109 00:25:58.716813 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 95fdf47b-223c-4eb6-a41b-30b44ade375a
	I0109 00:25:58.716824 1747564 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:25:58 GMT
	I0109 00:25:58.717007 1747564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-979047-m02","uid":"cb9ac37c-021d-49d9-bd4f-947cba9ffead","resourceVersion":"516","creationTimestamp":"2024-01-09T00:25:32Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-979047-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-979047","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_09T00_25_33_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:25:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0109 00:25:59.214069 1747564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-979047-m02
	I0109 00:25:59.214093 1747564 round_trippers.go:469] Request Headers:
	I0109 00:25:59.214102 1747564 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:25:59.214109 1747564 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0109 00:25:59.216560 1747564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0109 00:25:59.216582 1747564 round_trippers.go:577] Response Headers:
	I0109 00:25:59.216590 1747564 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:25:59 GMT
	I0109 00:25:59.216601 1747564 round_trippers.go:580]     Audit-Id: d1f2dbda-58b2-4715-838c-bc19436639f8
	I0109 00:25:59.216609 1747564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:25:59.216615 1747564 round_trippers.go:580]     Content-Type: application/json
	I0109 00:25:59.216621 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab5afe39-1a2e-4c0a-808e-6147fe73e525
	I0109 00:25:59.216627 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 95fdf47b-223c-4eb6-a41b-30b44ade375a
	I0109 00:25:59.216964 1747564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-979047-m02","uid":"cb9ac37c-021d-49d9-bd4f-947cba9ffead","resourceVersion":"516","creationTimestamp":"2024-01-09T00:25:32Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-979047-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-979047","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_09T00_25_33_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:25:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0109 00:25:59.714073 1747564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-979047-m02
	I0109 00:25:59.714101 1747564 round_trippers.go:469] Request Headers:
	I0109 00:25:59.714112 1747564 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:25:59.714119 1747564 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0109 00:25:59.716651 1747564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0109 00:25:59.716671 1747564 round_trippers.go:577] Response Headers:
	I0109 00:25:59.716680 1747564 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:25:59 GMT
	I0109 00:25:59.716686 1747564 round_trippers.go:580]     Audit-Id: e60af044-d3bd-4476-8aa8-f1075827bb22
	I0109 00:25:59.716693 1747564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:25:59.716699 1747564 round_trippers.go:580]     Content-Type: application/json
	I0109 00:25:59.716705 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab5afe39-1a2e-4c0a-808e-6147fe73e525
	I0109 00:25:59.716711 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 95fdf47b-223c-4eb6-a41b-30b44ade375a
	I0109 00:25:59.716832 1747564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-979047-m02","uid":"cb9ac37c-021d-49d9-bd4f-947cba9ffead","resourceVersion":"516","creationTimestamp":"2024-01-09T00:25:32Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-979047-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-979047","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_09T00_25_33_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:25:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0109 00:26:00.214901 1747564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-979047-m02
	I0109 00:26:00.214933 1747564 round_trippers.go:469] Request Headers:
	I0109 00:26:00.214944 1747564 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:26:00.214951 1747564 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0109 00:26:00.217584 1747564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0109 00:26:00.217615 1747564 round_trippers.go:577] Response Headers:
	I0109 00:26:00.217624 1747564 round_trippers.go:580]     Audit-Id: 6d57a7ae-e756-48af-8bb0-9c94fb836ed9
	I0109 00:26:00.217631 1747564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:26:00.217638 1747564 round_trippers.go:580]     Content-Type: application/json
	I0109 00:26:00.217645 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab5afe39-1a2e-4c0a-808e-6147fe73e525
	I0109 00:26:00.217651 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 95fdf47b-223c-4eb6-a41b-30b44ade375a
	I0109 00:26:00.217669 1747564 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:26:00 GMT
	I0109 00:26:00.217816 1747564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-979047-m02","uid":"cb9ac37c-021d-49d9-bd4f-947cba9ffead","resourceVersion":"516","creationTimestamp":"2024-01-09T00:25:32Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-979047-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-979047","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_09T00_25_33_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:25:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0109 00:26:00.218253 1747564 node_ready.go:58] node "multinode-979047-m02" has status "Ready":"False"
	I0109 00:26:00.715008 1747564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-979047-m02
	I0109 00:26:00.715031 1747564 round_trippers.go:469] Request Headers:
	I0109 00:26:00.715041 1747564 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0109 00:26:00.715049 1747564 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:26:00.717664 1747564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0109 00:26:00.717730 1747564 round_trippers.go:577] Response Headers:
	I0109 00:26:00.717754 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab5afe39-1a2e-4c0a-808e-6147fe73e525
	I0109 00:26:00.717777 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 95fdf47b-223c-4eb6-a41b-30b44ade375a
	I0109 00:26:00.717813 1747564 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:26:00 GMT
	I0109 00:26:00.717835 1747564 round_trippers.go:580]     Audit-Id: 7f0d4e3a-98bb-4650-81f4-2500b421ba97
	I0109 00:26:00.717850 1747564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:26:00.717861 1747564 round_trippers.go:580]     Content-Type: application/json
	I0109 00:26:00.718144 1747564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-979047-m02","uid":"cb9ac37c-021d-49d9-bd4f-947cba9ffead","resourceVersion":"516","creationTimestamp":"2024-01-09T00:25:32Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-979047-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-979047","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_09T00_25_33_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:25:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0109 00:26:01.214860 1747564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-979047-m02
	I0109 00:26:01.214885 1747564 round_trippers.go:469] Request Headers:
	I0109 00:26:01.214895 1747564 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:26:01.214902 1747564 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0109 00:26:01.217436 1747564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0109 00:26:01.217461 1747564 round_trippers.go:577] Response Headers:
	I0109 00:26:01.217470 1747564 round_trippers.go:580]     Audit-Id: f2ab1392-3fab-4d32-a6fa-c889810de7a7
	I0109 00:26:01.217477 1747564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:26:01.217483 1747564 round_trippers.go:580]     Content-Type: application/json
	I0109 00:26:01.217490 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab5afe39-1a2e-4c0a-808e-6147fe73e525
	I0109 00:26:01.217497 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 95fdf47b-223c-4eb6-a41b-30b44ade375a
	I0109 00:26:01.217506 1747564 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:26:01 GMT
	I0109 00:26:01.217878 1747564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-979047-m02","uid":"cb9ac37c-021d-49d9-bd4f-947cba9ffead","resourceVersion":"516","creationTimestamp":"2024-01-09T00:25:32Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-979047-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-979047","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_09T00_25_33_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:25:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0109 00:26:01.714488 1747564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-979047-m02
	I0109 00:26:01.714520 1747564 round_trippers.go:469] Request Headers:
	I0109 00:26:01.714530 1747564 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:26:01.714537 1747564 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0109 00:26:01.717073 1747564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0109 00:26:01.717093 1747564 round_trippers.go:577] Response Headers:
	I0109 00:26:01.717103 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab5afe39-1a2e-4c0a-808e-6147fe73e525
	I0109 00:26:01.717109 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 95fdf47b-223c-4eb6-a41b-30b44ade375a
	I0109 00:26:01.717116 1747564 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:26:01 GMT
	I0109 00:26:01.717122 1747564 round_trippers.go:580]     Audit-Id: 5661a5a0-0d16-465b-b632-e9554a2cf7fb
	I0109 00:26:01.717128 1747564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:26:01.717134 1747564 round_trippers.go:580]     Content-Type: application/json
	I0109 00:26:01.717267 1747564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-979047-m02","uid":"cb9ac37c-021d-49d9-bd4f-947cba9ffead","resourceVersion":"516","creationTimestamp":"2024-01-09T00:25:32Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-979047-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-979047","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_09T00_25_33_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:25:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0109 00:26:02.214353 1747564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-979047-m02
	I0109 00:26:02.214375 1747564 round_trippers.go:469] Request Headers:
	I0109 00:26:02.214385 1747564 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:26:02.214393 1747564 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0109 00:26:02.217029 1747564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0109 00:26:02.217054 1747564 round_trippers.go:577] Response Headers:
	I0109 00:26:02.217063 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab5afe39-1a2e-4c0a-808e-6147fe73e525
	I0109 00:26:02.217070 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 95fdf47b-223c-4eb6-a41b-30b44ade375a
	I0109 00:26:02.217076 1747564 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:26:02 GMT
	I0109 00:26:02.217083 1747564 round_trippers.go:580]     Audit-Id: e3b740d1-0de3-4a47-825d-4cf6a62daeaf
	I0109 00:26:02.217090 1747564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:26:02.217100 1747564 round_trippers.go:580]     Content-Type: application/json
	I0109 00:26:02.217373 1747564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-979047-m02","uid":"cb9ac37c-021d-49d9-bd4f-947cba9ffead","resourceVersion":"516","creationTimestamp":"2024-01-09T00:25:32Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-979047-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-979047","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_09T00_25_33_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:25:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0109 00:26:02.714776 1747564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-979047-m02
	I0109 00:26:02.714802 1747564 round_trippers.go:469] Request Headers:
	I0109 00:26:02.714812 1747564 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:26:02.714820 1747564 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0109 00:26:02.717346 1747564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0109 00:26:02.717364 1747564 round_trippers.go:577] Response Headers:
	I0109 00:26:02.717372 1747564 round_trippers.go:580]     Audit-Id: 9f161cc2-9c0a-48ce-af2d-bdde73f0d3b8
	I0109 00:26:02.717378 1747564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:26:02.717384 1747564 round_trippers.go:580]     Content-Type: application/json
	I0109 00:26:02.717391 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab5afe39-1a2e-4c0a-808e-6147fe73e525
	I0109 00:26:02.717397 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 95fdf47b-223c-4eb6-a41b-30b44ade375a
	I0109 00:26:02.717403 1747564 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:26:02 GMT
	I0109 00:26:02.718025 1747564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-979047-m02","uid":"cb9ac37c-021d-49d9-bd4f-947cba9ffead","resourceVersion":"516","creationTimestamp":"2024-01-09T00:25:32Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-979047-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-979047","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_09T00_25_33_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:25:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0109 00:26:02.718481 1747564 node_ready.go:58] node "multinode-979047-m02" has status "Ready":"False"
	I0109 00:26:03.214716 1747564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-979047-m02
	I0109 00:26:03.214739 1747564 round_trippers.go:469] Request Headers:
	I0109 00:26:03.214749 1747564 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:26:03.214756 1747564 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0109 00:26:03.218373 1747564 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0109 00:26:03.218396 1747564 round_trippers.go:577] Response Headers:
	I0109 00:26:03.218407 1747564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:26:03.218413 1747564 round_trippers.go:580]     Content-Type: application/json
	I0109 00:26:03.218419 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab5afe39-1a2e-4c0a-808e-6147fe73e525
	I0109 00:26:03.218425 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 95fdf47b-223c-4eb6-a41b-30b44ade375a
	I0109 00:26:03.218431 1747564 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:26:03 GMT
	I0109 00:26:03.218456 1747564 round_trippers.go:580]     Audit-Id: 2f877641-beb5-4098-9433-7fcc3e78d21c
	I0109 00:26:03.218625 1747564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-979047-m02","uid":"cb9ac37c-021d-49d9-bd4f-947cba9ffead","resourceVersion":"516","creationTimestamp":"2024-01-09T00:25:32Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-979047-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-979047","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_09T00_25_33_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:25:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0109 00:26:03.714198 1747564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-979047-m02
	I0109 00:26:03.714225 1747564 round_trippers.go:469] Request Headers:
	I0109 00:26:03.714236 1747564 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0109 00:26:03.714243 1747564 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:26:03.716857 1747564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0109 00:26:03.716882 1747564 round_trippers.go:577] Response Headers:
	I0109 00:26:03.716891 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 95fdf47b-223c-4eb6-a41b-30b44ade375a
	I0109 00:26:03.716898 1747564 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:26:03 GMT
	I0109 00:26:03.716912 1747564 round_trippers.go:580]     Audit-Id: c17a8beb-c058-4e73-8f63-d0b1a8c038a2
	I0109 00:26:03.716918 1747564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:26:03.716924 1747564 round_trippers.go:580]     Content-Type: application/json
	I0109 00:26:03.716938 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab5afe39-1a2e-4c0a-808e-6147fe73e525
	I0109 00:26:03.717050 1747564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-979047-m02","uid":"cb9ac37c-021d-49d9-bd4f-947cba9ffead","resourceVersion":"516","creationTimestamp":"2024-01-09T00:25:32Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-979047-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-979047","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_09T00_25_33_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:25:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0109 00:26:04.214136 1747564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-979047-m02
	I0109 00:26:04.214165 1747564 round_trippers.go:469] Request Headers:
	I0109 00:26:04.214175 1747564 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:26:04.214182 1747564 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0109 00:26:04.216926 1747564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0109 00:26:04.216947 1747564 round_trippers.go:577] Response Headers:
	I0109 00:26:04.216955 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab5afe39-1a2e-4c0a-808e-6147fe73e525
	I0109 00:26:04.216961 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 95fdf47b-223c-4eb6-a41b-30b44ade375a
	I0109 00:26:04.216967 1747564 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:26:04 GMT
	I0109 00:26:04.216973 1747564 round_trippers.go:580]     Audit-Id: 370ed589-d2e1-4bfd-935d-7a3bcc2dfc4e
	I0109 00:26:04.217069 1747564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:26:04.217084 1747564 round_trippers.go:580]     Content-Type: application/json
	I0109 00:26:04.217290 1747564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-979047-m02","uid":"cb9ac37c-021d-49d9-bd4f-947cba9ffead","resourceVersion":"540","creationTimestamp":"2024-01-09T00:25:32Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-979047-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-979047","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_09T00_25_33_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:25:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5930 chars]
	I0109 00:26:04.217696 1747564 node_ready.go:49] node "multinode-979047-m02" has status "Ready":"True"
	I0109 00:26:04.217744 1747564 node_ready.go:38] duration metric: took 30.503868337s waiting for node "multinode-979047-m02" to be "Ready" ...
	I0109 00:26:04.217754 1747564 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
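
The GETs above are the standard client-go readiness-polling pattern: fetch the Node object roughly every 500ms and inspect its Ready condition until it reports True (here after 30.5s). Below is a minimal, hypothetical sketch of that loop, assuming client-go and a kubeconfig at the default path; the function name, 6-minute cap, and error handling are illustrative assumptions, not minikube's actual node_ready.go code.

	// Sketch only: polls a node's Ready condition every 500ms, as the
	// log above does. Names, timeout, and error handling are assumed.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
		return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
			func(ctx context.Context) (bool, error) {
				node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
				if err != nil {
					return false, nil // treat API errors as transient and keep polling
				}
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady {
						return c.Status == corev1.ConditionTrue, nil
					}
				}
				return false, nil // no Ready condition reported yet
			})
	}

	func main() {
		config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(config)
		if err := waitNodeReady(context.Background(), cs, "multinode-979047-m02", 6*time.Minute); err != nil {
			panic(err)
		}
		fmt.Println("node is Ready")
	}

The pod_ready wait that follows in the log applies the same pattern to Pod objects (GET the pod, check its Ready condition, and confirm the hosting node), so the remaining cycles below read the same way.
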
	I0109 00:26:04.217825 1747564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0109 00:26:04.217835 1747564 round_trippers.go:469] Request Headers:
	I0109 00:26:04.217843 1747564 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:26:04.217849 1747564 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0109 00:26:04.221646 1747564 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0109 00:26:04.221674 1747564 round_trippers.go:577] Response Headers:
	I0109 00:26:04.221682 1747564 round_trippers.go:580]     Audit-Id: de8f05bf-41ac-4201-aa16-1766c0f244f5
	I0109 00:26:04.221689 1747564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:26:04.221695 1747564 round_trippers.go:580]     Content-Type: application/json
	I0109 00:26:04.221702 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab5afe39-1a2e-4c0a-808e-6147fe73e525
	I0109 00:26:04.221708 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 95fdf47b-223c-4eb6-a41b-30b44ade375a
	I0109 00:26:04.221714 1747564 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:26:04 GMT
	I0109 00:26:04.222476 1747564 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"540"},"items":[{"metadata":{"name":"coredns-5dd5756b68-shbhd","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"46759197-0373-4f95-ba9c-8065624d0f27","resourceVersion":"443","creationTimestamp":"2024-01-09T00:25:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"eed4b8ed-33af-4fa3-b93c-c7a4f038d6a2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:25:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"eed4b8ed-33af-4fa3-b93c-c7a4f038d6a2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 68972 chars]
	I0109 00:26:04.225587 1747564 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-shbhd" in "kube-system" namespace to be "Ready" ...
	I0109 00:26:04.225682 1747564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-shbhd
	I0109 00:26:04.225693 1747564 round_trippers.go:469] Request Headers:
	I0109 00:26:04.225702 1747564 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:26:04.225709 1747564 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0109 00:26:04.228124 1747564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0109 00:26:04.228147 1747564 round_trippers.go:577] Response Headers:
	I0109 00:26:04.228176 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 95fdf47b-223c-4eb6-a41b-30b44ade375a
	I0109 00:26:04.228186 1747564 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:26:04 GMT
	I0109 00:26:04.228198 1747564 round_trippers.go:580]     Audit-Id: 9e582cf7-63e7-4675-90dd-36f7568c2edd
	I0109 00:26:04.228212 1747564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:26:04.228218 1747564 round_trippers.go:580]     Content-Type: application/json
	I0109 00:26:04.228229 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab5afe39-1a2e-4c0a-808e-6147fe73e525
	I0109 00:26:04.228369 1747564 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-shbhd","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"46759197-0373-4f95-ba9c-8065624d0f27","resourceVersion":"443","creationTimestamp":"2024-01-09T00:25:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"eed4b8ed-33af-4fa3-b93c-c7a4f038d6a2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:25:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"eed4b8ed-33af-4fa3-b93c-c7a4f038d6a2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6263 chars]
	I0109 00:26:04.228935 1747564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-979047
	I0109 00:26:04.228953 1747564 round_trippers.go:469] Request Headers:
	I0109 00:26:04.228961 1747564 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:26:04.228968 1747564 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0109 00:26:04.232198 1747564 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0109 00:26:04.232244 1747564 round_trippers.go:577] Response Headers:
	I0109 00:26:04.232292 1747564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:26:04.232305 1747564 round_trippers.go:580]     Content-Type: application/json
	I0109 00:26:04.232314 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab5afe39-1a2e-4c0a-808e-6147fe73e525
	I0109 00:26:04.232321 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 95fdf47b-223c-4eb6-a41b-30b44ade375a
	I0109 00:26:04.232327 1747564 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:26:04 GMT
	I0109 00:26:04.232333 1747564 round_trippers.go:580]     Audit-Id: 8f60f0f3-7e7e-4963-800c-d314997677d2
	I0109 00:26:04.232438 1747564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-979047","uid":"225848e2-4f2a-49f7-a1f4-c7468a250f39","resourceVersion":"427","creationTimestamp":"2024-01-09T00:24:51Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-979047","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-979047","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_09T00_24_55_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-09T00:24:51Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0109 00:26:04.232831 1747564 pod_ready.go:92] pod "coredns-5dd5756b68-shbhd" in "kube-system" namespace has status "Ready":"True"
	I0109 00:26:04.232849 1747564 pod_ready.go:81] duration metric: took 7.235132ms waiting for pod "coredns-5dd5756b68-shbhd" in "kube-system" namespace to be "Ready" ...
	I0109 00:26:04.232860 1747564 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-979047" in "kube-system" namespace to be "Ready" ...
	I0109 00:26:04.232921 1747564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-979047
	I0109 00:26:04.232930 1747564 round_trippers.go:469] Request Headers:
	I0109 00:26:04.232938 1747564 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:26:04.232945 1747564 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0109 00:26:04.235199 1747564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0109 00:26:04.235217 1747564 round_trippers.go:577] Response Headers:
	I0109 00:26:04.235241 1747564 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:26:04 GMT
	I0109 00:26:04.235248 1747564 round_trippers.go:580]     Audit-Id: da0f9d1e-15a6-4553-8cfa-7e847b22678a
	I0109 00:26:04.235256 1747564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:26:04.235262 1747564 round_trippers.go:580]     Content-Type: application/json
	I0109 00:26:04.235268 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab5afe39-1a2e-4c0a-808e-6147fe73e525
	I0109 00:26:04.235274 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 95fdf47b-223c-4eb6-a41b-30b44ade375a
	I0109 00:26:04.235367 1747564 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-979047","namespace":"kube-system","uid":"a5a13277-0ebc-493c-a6d6-f46ae712ddb9","resourceVersion":"453","creationTimestamp":"2024-01-09T00:24:54Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"0cec3581efd3ceb60c1df7924ae017cf","kubernetes.io/config.mirror":"0cec3581efd3ceb60c1df7924ae017cf","kubernetes.io/config.seen":"2024-01-09T00:24:54.513907754Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-979047","uid":"225848e2-4f2a-49f7-a1f4-c7468a250f39","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:24:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 5833 chars]
	I0109 00:26:04.235825 1747564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-979047
	I0109 00:26:04.235842 1747564 round_trippers.go:469] Request Headers:
	I0109 00:26:04.235849 1747564 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:26:04.235857 1747564 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0109 00:26:04.238111 1747564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0109 00:26:04.238171 1747564 round_trippers.go:577] Response Headers:
	I0109 00:26:04.238195 1747564 round_trippers.go:580]     Content-Type: application/json
	I0109 00:26:04.238209 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab5afe39-1a2e-4c0a-808e-6147fe73e525
	I0109 00:26:04.238215 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 95fdf47b-223c-4eb6-a41b-30b44ade375a
	I0109 00:26:04.238222 1747564 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:26:04 GMT
	I0109 00:26:04.238228 1747564 round_trippers.go:580]     Audit-Id: e9082d1f-5459-4473-a36f-b9ea4d7aea67
	I0109 00:26:04.238263 1747564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:26:04.238400 1747564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-979047","uid":"225848e2-4f2a-49f7-a1f4-c7468a250f39","resourceVersion":"427","creationTimestamp":"2024-01-09T00:24:51Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-979047","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-979047","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_09T00_24_55_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-09T00:24:51Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0109 00:26:04.238831 1747564 pod_ready.go:92] pod "etcd-multinode-979047" in "kube-system" namespace has status "Ready":"True"
	I0109 00:26:04.238852 1747564 pod_ready.go:81] duration metric: took 5.982001ms waiting for pod "etcd-multinode-979047" in "kube-system" namespace to be "Ready" ...
	I0109 00:26:04.238874 1747564 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-979047" in "kube-system" namespace to be "Ready" ...
	I0109 00:26:04.238940 1747564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-979047
	I0109 00:26:04.238948 1747564 round_trippers.go:469] Request Headers:
	I0109 00:26:04.238956 1747564 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0109 00:26:04.238963 1747564 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:26:04.241339 1747564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0109 00:26:04.241393 1747564 round_trippers.go:577] Response Headers:
	I0109 00:26:04.241408 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 95fdf47b-223c-4eb6-a41b-30b44ade375a
	I0109 00:26:04.241416 1747564 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:26:04 GMT
	I0109 00:26:04.241423 1747564 round_trippers.go:580]     Audit-Id: 970923a0-c02f-4e79-b2b4-0dcb08f94be1
	I0109 00:26:04.241429 1747564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:26:04.241435 1747564 round_trippers.go:580]     Content-Type: application/json
	I0109 00:26:04.241441 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab5afe39-1a2e-4c0a-808e-6147fe73e525
	I0109 00:26:04.241586 1747564 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-979047","namespace":"kube-system","uid":"38619ccd-6ea3-42d0-8b26-b59a1af5875d","resourceVersion":"452","creationTimestamp":"2024-01-09T00:24:54Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"efcc78c31982772633c5559a7765d574","kubernetes.io/config.mirror":"efcc78c31982772633c5559a7765d574","kubernetes.io/config.seen":"2024-01-09T00:24:54.513911972Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-979047","uid":"225848e2-4f2a-49f7-a1f4-c7468a250f39","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:24:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8219 chars]
	I0109 00:26:04.242163 1747564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-979047
	I0109 00:26:04.242180 1747564 round_trippers.go:469] Request Headers:
	I0109 00:26:04.242188 1747564 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:26:04.242210 1747564 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0109 00:26:04.244567 1747564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0109 00:26:04.244628 1747564 round_trippers.go:577] Response Headers:
	I0109 00:26:04.244651 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab5afe39-1a2e-4c0a-808e-6147fe73e525
	I0109 00:26:04.244665 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 95fdf47b-223c-4eb6-a41b-30b44ade375a
	I0109 00:26:04.244672 1747564 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:26:04 GMT
	I0109 00:26:04.244678 1747564 round_trippers.go:580]     Audit-Id: 43ca04ac-8912-497c-a3d5-ea936d3e15b0
	I0109 00:26:04.244699 1747564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:26:04.244713 1747564 round_trippers.go:580]     Content-Type: application/json
	I0109 00:26:04.244813 1747564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-979047","uid":"225848e2-4f2a-49f7-a1f4-c7468a250f39","resourceVersion":"427","creationTimestamp":"2024-01-09T00:24:51Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-979047","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-979047","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_09T00_24_55_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-09T00:24:51Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0109 00:26:04.245224 1747564 pod_ready.go:92] pod "kube-apiserver-multinode-979047" in "kube-system" namespace has status "Ready":"True"
	I0109 00:26:04.245240 1747564 pod_ready.go:81] duration metric: took 6.355461ms waiting for pod "kube-apiserver-multinode-979047" in "kube-system" namespace to be "Ready" ...
	I0109 00:26:04.245251 1747564 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-979047" in "kube-system" namespace to be "Ready" ...
	I0109 00:26:04.245318 1747564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-979047
	I0109 00:26:04.245329 1747564 round_trippers.go:469] Request Headers:
	I0109 00:26:04.245337 1747564 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:26:04.245344 1747564 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0109 00:26:04.247872 1747564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0109 00:26:04.247895 1747564 round_trippers.go:577] Response Headers:
	I0109 00:26:04.247904 1747564 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:26:04 GMT
	I0109 00:26:04.247910 1747564 round_trippers.go:580]     Audit-Id: 635b44d9-242a-4308-8d05-edb9c36f115e
	I0109 00:26:04.247916 1747564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:26:04.247922 1747564 round_trippers.go:580]     Content-Type: application/json
	I0109 00:26:04.247928 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab5afe39-1a2e-4c0a-808e-6147fe73e525
	I0109 00:26:04.247935 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 95fdf47b-223c-4eb6-a41b-30b44ade375a
	I0109 00:26:04.248068 1747564 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-979047","namespace":"kube-system","uid":"cd5437df-a3ac-4591-8cce-765486ff6afb","resourceVersion":"454","creationTimestamp":"2024-01-09T00:24:53Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"4ec0a5968f98276fe45449a372f72485","kubernetes.io/config.mirror":"4ec0a5968f98276fe45449a372f72485","kubernetes.io/config.seen":"2024-01-09T00:24:47.067515085Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-979047","uid":"225848e2-4f2a-49f7-a1f4-c7468a250f39","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:24:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7794 chars]
	I0109 00:26:04.248590 1747564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-979047
	I0109 00:26:04.248605 1747564 round_trippers.go:469] Request Headers:
	I0109 00:26:04.248613 1747564 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:26:04.248620 1747564 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0109 00:26:04.250944 1747564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0109 00:26:04.250997 1747564 round_trippers.go:577] Response Headers:
	I0109 00:26:04.251019 1747564 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:26:04 GMT
	I0109 00:26:04.251041 1747564 round_trippers.go:580]     Audit-Id: 04e0c3f1-e56b-4d32-825d-8c32ee7d0b41
	I0109 00:26:04.251076 1747564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:26:04.251098 1747564 round_trippers.go:580]     Content-Type: application/json
	I0109 00:26:04.251109 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab5afe39-1a2e-4c0a-808e-6147fe73e525
	I0109 00:26:04.251116 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 95fdf47b-223c-4eb6-a41b-30b44ade375a
	I0109 00:26:04.251226 1747564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-979047","uid":"225848e2-4f2a-49f7-a1f4-c7468a250f39","resourceVersion":"427","creationTimestamp":"2024-01-09T00:24:51Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-979047","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-979047","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_09T00_24_55_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-09T00:24:51Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0109 00:26:04.251613 1747564 pod_ready.go:92] pod "kube-controller-manager-multinode-979047" in "kube-system" namespace has status "Ready":"True"
	I0109 00:26:04.251634 1747564 pod_ready.go:81] duration metric: took 6.37142ms waiting for pod "kube-controller-manager-multinode-979047" in "kube-system" namespace to be "Ready" ...
	I0109 00:26:04.251653 1747564 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-r5w9b" in "kube-system" namespace to be "Ready" ...
	I0109 00:26:04.415022 1747564 request.go:629] Waited for 163.300308ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-r5w9b
	I0109 00:26:04.415088 1747564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-r5w9b
	I0109 00:26:04.415100 1747564 round_trippers.go:469] Request Headers:
	I0109 00:26:04.415110 1747564 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:26:04.415118 1747564 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0109 00:26:04.417630 1747564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0109 00:26:04.417672 1747564 round_trippers.go:577] Response Headers:
	I0109 00:26:04.417682 1747564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:26:04.417688 1747564 round_trippers.go:580]     Content-Type: application/json
	I0109 00:26:04.417698 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab5afe39-1a2e-4c0a-808e-6147fe73e525
	I0109 00:26:04.417706 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 95fdf47b-223c-4eb6-a41b-30b44ade375a
	I0109 00:26:04.417716 1747564 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:26:04 GMT
	I0109 00:26:04.417722 1747564 round_trippers.go:580]     Audit-Id: 5369a000-8546-4d46-a3e1-011980c1a272
	I0109 00:26:04.417870 1747564 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-r5w9b","generateName":"kube-proxy-","namespace":"kube-system","uid":"0b49bb1e-f3f4-4760-bb78-97d8bc5ae4e6","resourceVersion":"423","creationTimestamp":"2024-01-09T00:25:07Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"920e93f8-1b6d-4b70-a3ad-394be18be16a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:25:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"920e93f8-1b6d-4b70-a3ad-394be18be16a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5510 chars]
	I0109 00:26:04.614701 1747564 request.go:629] Waited for 196.343462ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-979047
	I0109 00:26:04.614785 1747564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-979047
	I0109 00:26:04.614794 1747564 round_trippers.go:469] Request Headers:
	I0109 00:26:04.614803 1747564 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:26:04.614813 1747564 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0109 00:26:04.617329 1747564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0109 00:26:04.617348 1747564 round_trippers.go:577] Response Headers:
	I0109 00:26:04.617356 1747564 round_trippers.go:580]     Audit-Id: 2d103345-662f-4c65-8e90-dd48a489d90a
	I0109 00:26:04.617365 1747564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:26:04.617389 1747564 round_trippers.go:580]     Content-Type: application/json
	I0109 00:26:04.617402 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab5afe39-1a2e-4c0a-808e-6147fe73e525
	I0109 00:26:04.617409 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 95fdf47b-223c-4eb6-a41b-30b44ade375a
	I0109 00:26:04.617415 1747564 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:26:04 GMT
	I0109 00:26:04.617775 1747564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-979047","uid":"225848e2-4f2a-49f7-a1f4-c7468a250f39","resourceVersion":"427","creationTimestamp":"2024-01-09T00:24:51Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-979047","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-979047","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_09T00_24_55_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-09T00:24:51Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0109 00:26:04.618212 1747564 pod_ready.go:92] pod "kube-proxy-r5w9b" in "kube-system" namespace has status "Ready":"True"
	I0109 00:26:04.618231 1747564 pod_ready.go:81] duration metric: took 366.567958ms waiting for pod "kube-proxy-r5w9b" in "kube-system" namespace to be "Ready" ...
	I0109 00:26:04.618256 1747564 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-v2s5z" in "kube-system" namespace to be "Ready" ...
	I0109 00:26:04.815094 1747564 request.go:629] Waited for 196.765193ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-v2s5z
	I0109 00:26:04.815174 1747564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-v2s5z
	I0109 00:26:04.815185 1747564 round_trippers.go:469] Request Headers:
	I0109 00:26:04.815200 1747564 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:26:04.815208 1747564 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0109 00:26:04.817725 1747564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0109 00:26:04.817787 1747564 round_trippers.go:577] Response Headers:
	I0109 00:26:04.817810 1747564 round_trippers.go:580]     Audit-Id: 058ac796-af22-42f9-8db7-8405d8abac0f
	I0109 00:26:04.817833 1747564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:26:04.817870 1747564 round_trippers.go:580]     Content-Type: application/json
	I0109 00:26:04.817895 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab5afe39-1a2e-4c0a-808e-6147fe73e525
	I0109 00:26:04.817909 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 95fdf47b-223c-4eb6-a41b-30b44ade375a
	I0109 00:26:04.817916 1747564 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:26:04 GMT
	I0109 00:26:04.818051 1747564 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-v2s5z","generateName":"kube-proxy-","namespace":"kube-system","uid":"8404e673-7681-4a21-bead-4724ccb060bc","resourceVersion":"504","creationTimestamp":"2024-01-09T00:25:32Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"920e93f8-1b6d-4b70-a3ad-394be18be16a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:25:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"920e93f8-1b6d-4b70-a3ad-394be18be16a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5518 chars]
	I0109 00:26:05.015030 1747564 request.go:629] Waited for 196.432898ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-979047-m02
	I0109 00:26:05.015120 1747564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-979047-m02
	I0109 00:26:05.015135 1747564 round_trippers.go:469] Request Headers:
	I0109 00:26:05.015144 1747564 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:26:05.015154 1747564 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0109 00:26:05.018011 1747564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0109 00:26:05.018038 1747564 round_trippers.go:577] Response Headers:
	I0109 00:26:05.018047 1747564 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:26:05 GMT
	I0109 00:26:05.018053 1747564 round_trippers.go:580]     Audit-Id: f59c1c3c-8bb8-43c7-a3a0-4b2f37719969
	I0109 00:26:05.018072 1747564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:26:05.018080 1747564 round_trippers.go:580]     Content-Type: application/json
	I0109 00:26:05.018086 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab5afe39-1a2e-4c0a-808e-6147fe73e525
	I0109 00:26:05.018093 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 95fdf47b-223c-4eb6-a41b-30b44ade375a
	I0109 00:26:05.018202 1747564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-979047-m02","uid":"cb9ac37c-021d-49d9-bd4f-947cba9ffead","resourceVersion":"540","creationTimestamp":"2024-01-09T00:25:32Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-979047-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-979047","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_09T00_25_33_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:25:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5930 chars]
	I0109 00:26:05.018633 1747564 pod_ready.go:92] pod "kube-proxy-v2s5z" in "kube-system" namespace has status "Ready":"True"
	I0109 00:26:05.018655 1747564 pod_ready.go:81] duration metric: took 400.384797ms waiting for pod "kube-proxy-v2s5z" in "kube-system" namespace to be "Ready" ...
	I0109 00:26:05.018673 1747564 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-979047" in "kube-system" namespace to be "Ready" ...
	I0109 00:26:05.214386 1747564 request.go:629] Waited for 195.637823ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-979047
	I0109 00:26:05.214463 1747564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-979047
	I0109 00:26:05.214491 1747564 round_trippers.go:469] Request Headers:
	I0109 00:26:05.214504 1747564 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:26:05.214512 1747564 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0109 00:26:05.216998 1747564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0109 00:26:05.217059 1747564 round_trippers.go:577] Response Headers:
	I0109 00:26:05.217071 1747564 round_trippers.go:580]     Audit-Id: 8403325b-fa0e-47ad-9eae-d20e730907cd
	I0109 00:26:05.217078 1747564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:26:05.217087 1747564 round_trippers.go:580]     Content-Type: application/json
	I0109 00:26:05.217100 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab5afe39-1a2e-4c0a-808e-6147fe73e525
	I0109 00:26:05.217106 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 95fdf47b-223c-4eb6-a41b-30b44ade375a
	I0109 00:26:05.217115 1747564 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:26:05 GMT
	I0109 00:26:05.217232 1747564 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-979047","namespace":"kube-system","uid":"332540fe-3c27-468f-9108-453e0086f012","resourceVersion":"451","creationTimestamp":"2024-01-09T00:24:54Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"b7338a293703ada6fed293fc7aaddf4d","kubernetes.io/config.mirror":"b7338a293703ada6fed293fc7aaddf4d","kubernetes.io/config.seen":"2024-01-09T00:24:54.513914901Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-979047","uid":"225848e2-4f2a-49f7-a1f4-c7468a250f39","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-09T00:24:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4676 chars]
	I0109 00:26:05.415036 1747564 request.go:629] Waited for 197.351019ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-979047
	I0109 00:26:05.415157 1747564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-979047
	I0109 00:26:05.415171 1747564 round_trippers.go:469] Request Headers:
	I0109 00:26:05.415180 1747564 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:26:05.415188 1747564 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0109 00:26:05.417811 1747564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0109 00:26:05.417838 1747564 round_trippers.go:577] Response Headers:
	I0109 00:26:05.417848 1747564 round_trippers.go:580]     Audit-Id: 44ad678f-7048-4349-bacb-8e6b03f4b77e
	I0109 00:26:05.417854 1747564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:26:05.417860 1747564 round_trippers.go:580]     Content-Type: application/json
	I0109 00:26:05.417866 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab5afe39-1a2e-4c0a-808e-6147fe73e525
	I0109 00:26:05.417873 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 95fdf47b-223c-4eb6-a41b-30b44ade375a
	I0109 00:26:05.417879 1747564 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:26:05 GMT
	I0109 00:26:05.417998 1747564 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-979047","uid":"225848e2-4f2a-49f7-a1f4-c7468a250f39","resourceVersion":"427","creationTimestamp":"2024-01-09T00:24:51Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-979047","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-979047","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_09T00_24_55_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-09T00:24:51Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0109 00:26:05.418406 1747564 pod_ready.go:92] pod "kube-scheduler-multinode-979047" in "kube-system" namespace has status "Ready":"True"
	I0109 00:26:05.418428 1747564 pod_ready.go:81] duration metric: took 399.742649ms waiting for pod "kube-scheduler-multinode-979047" in "kube-system" namespace to be "Ready" ...
	I0109 00:26:05.418463 1747564 pod_ready.go:38] duration metric: took 1.200693586s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0109 00:26:05.418483 1747564 system_svc.go:44] waiting for kubelet service to be running ....
	I0109 00:26:05.418541 1747564 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0109 00:26:05.432588 1747564 system_svc.go:56] duration metric: took 14.097249ms WaitForService to wait for kubelet.
	I0109 00:26:05.432612 1747564 kubeadm.go:581] duration metric: took 31.744920931s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0109 00:26:05.432634 1747564 node_conditions.go:102] verifying NodePressure condition ...
	I0109 00:26:05.615015 1747564 request.go:629] Waited for 182.309795ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes
	I0109 00:26:05.615095 1747564 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes
	I0109 00:26:05.615106 1747564 round_trippers.go:469] Request Headers:
	I0109 00:26:05.615116 1747564 round_trippers.go:473]     Accept: application/json, */*
	I0109 00:26:05.615128 1747564 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0109 00:26:05.617714 1747564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0109 00:26:05.617733 1747564 round_trippers.go:577] Response Headers:
	I0109 00:26:05.617742 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ab5afe39-1a2e-4c0a-808e-6147fe73e525
	I0109 00:26:05.617748 1747564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 95fdf47b-223c-4eb6-a41b-30b44ade375a
	I0109 00:26:05.617755 1747564 round_trippers.go:580]     Date: Tue, 09 Jan 2024 00:26:05 GMT
	I0109 00:26:05.617761 1747564 round_trippers.go:580]     Audit-Id: b4b5b635-b196-4c4c-949a-c76e5d790a00
	I0109 00:26:05.617767 1747564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0109 00:26:05.617787 1747564 round_trippers.go:580]     Content-Type: application/json
	I0109 00:26:05.617968 1747564 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"541"},"items":[{"metadata":{"name":"multinode-979047","uid":"225848e2-4f2a-49f7-a1f4-c7468a250f39","resourceVersion":"427","creationTimestamp":"2024-01-09T00:24:51Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-979047","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a","minikube.k8s.io/name":"multinode-979047","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_09T00_24_55_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 13004 chars]
	I0109 00:26:05.618661 1747564 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0109 00:26:05.618682 1747564 node_conditions.go:123] node cpu capacity is 2
	I0109 00:26:05.618692 1747564 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0109 00:26:05.618697 1747564 node_conditions.go:123] node cpu capacity is 2
	I0109 00:26:05.618708 1747564 node_conditions.go:105] duration metric: took 186.066694ms to run NodePressure ...
	I0109 00:26:05.618724 1747564 start.go:228] waiting for startup goroutines ...
	I0109 00:26:05.618751 1747564 start.go:242] writing updated cluster config ...
	I0109 00:26:05.619070 1747564 ssh_runner.go:195] Run: rm -f paused
	I0109 00:26:05.681466 1747564 start.go:600] kubectl: 1.29.0, cluster: 1.28.4 (minor skew: 1)
	I0109 00:26:05.685255 1747564 out.go:177] * Done! kubectl is now configured to use "multinode-979047" cluster and "default" namespace by default
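
The pod_ready.go polling recorded above reduces to fetching each control-plane pod and reading its Ready condition until it reports True (the `has status "Ready":"True"` lines), with client-go's client-side throttling accounting for the "Waited for ... due to client-side throttling" entries. A minimal client-go sketch of that check, assuming a kubeconfig at the default path; the pod name is copied from the log and the rest is illustrative, not minikube's actual implementation:

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // podIsReady mirrors the check behind the log's `has status "Ready":"True"`:
    // scan the pod's conditions for PodReady and compare its status.
    func podIsReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // Pod name taken from the log above; any pod name works here.
        pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "etcd-multinode-979047", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Printf("pod %s ready: %v\n", pod.Name, podIsReady(pod))
    }

minikube loops this per pod (etcd, kube-apiserver, kube-controller-manager, kube-proxy, kube-scheduler) until each reports Ready or the 6m0s budget expires.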
	
	
	==> CRI-O <==
	Jan 09 00:25:10 multinode-979047 crio[893]: time="2024-01-09 00:25:10.805896151Z" level=info msg="Starting container: 1f15e3afeb2c8a6a9d2eab1037a344b3c89c528b17d2b2cde2fde2251385ee68" id=5de48ad3-d958-40df-bf3a-3337903da549 name=/runtime.v1.RuntimeService/StartContainer
	Jan 09 00:25:10 multinode-979047 crio[893]: time="2024-01-09 00:25:10.806540071Z" level=info msg="Created container 77a9783e675a332cca34558be944571a44b732c46f504839969353acb2a0be56: kube-system/storage-provisioner/storage-provisioner" id=eaa5e79f-e47f-47c4-a819-22817475f962 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 09 00:25:10 multinode-979047 crio[893]: time="2024-01-09 00:25:10.807037667Z" level=info msg="Starting container: 77a9783e675a332cca34558be944571a44b732c46f504839969353acb2a0be56" id=b34b37aa-f5c6-4c49-ac05-b60cd312d1bd name=/runtime.v1.RuntimeService/StartContainer
	Jan 09 00:25:10 multinode-979047 crio[893]: time="2024-01-09 00:25:10.819075048Z" level=info msg="Started container" PID=1919 containerID=1f15e3afeb2c8a6a9d2eab1037a344b3c89c528b17d2b2cde2fde2251385ee68 description=kube-system/coredns-5dd5756b68-shbhd/coredns id=5de48ad3-d958-40df-bf3a-3337903da549 name=/runtime.v1.RuntimeService/StartContainer sandboxID=77cf86082128a9ac0ced40b39aeb40d66776076b1f3894dde1d36e5e81c6f076
	Jan 09 00:25:10 multinode-979047 crio[893]: time="2024-01-09 00:25:10.823456297Z" level=info msg="Started container" PID=1923 containerID=77a9783e675a332cca34558be944571a44b732c46f504839969353acb2a0be56 description=kube-system/storage-provisioner/storage-provisioner id=b34b37aa-f5c6-4c49-ac05-b60cd312d1bd name=/runtime.v1.RuntimeService/StartContainer sandboxID=8a8c41083ce898a3ca355f86b125e927cc22acd65f25730c56a8dfb8a5749006
	Jan 09 00:26:06 multinode-979047 crio[893]: time="2024-01-09 00:26:06.945778455Z" level=info msg="Running pod sandbox: default/busybox-5bc68d56bd-bxf99/POD" id=b6c84b9a-17f3-4211-9b20-1889ad5e5335 name=/runtime.v1.RuntimeService/RunPodSandbox
	Jan 09 00:26:06 multinode-979047 crio[893]: time="2024-01-09 00:26:06.945850423Z" level=warning msg="Allowed annotations are specified for workload []"
	Jan 09 00:26:06 multinode-979047 crio[893]: time="2024-01-09 00:26:06.971105587Z" level=info msg="Got pod network &{Name:busybox-5bc68d56bd-bxf99 Namespace:default ID:5d89837032d52d2e96170c64d643ddeacb42e77d9db4dd0f4b9a197de8507943 UID:1b733dbe-e070-42ab-ad52-ad8256f0cdd5 NetNS:/var/run/netns/f473a533-287f-4c78-b4a1-1f82265569d9 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Jan 09 00:26:06 multinode-979047 crio[893]: time="2024-01-09 00:26:06.971161243Z" level=info msg="Adding pod default_busybox-5bc68d56bd-bxf99 to CNI network \"kindnet\" (type=ptp)"
	Jan 09 00:26:06 multinode-979047 crio[893]: time="2024-01-09 00:26:06.981686685Z" level=info msg="Got pod network &{Name:busybox-5bc68d56bd-bxf99 Namespace:default ID:5d89837032d52d2e96170c64d643ddeacb42e77d9db4dd0f4b9a197de8507943 UID:1b733dbe-e070-42ab-ad52-ad8256f0cdd5 NetNS:/var/run/netns/f473a533-287f-4c78-b4a1-1f82265569d9 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Jan 09 00:26:06 multinode-979047 crio[893]: time="2024-01-09 00:26:06.981839753Z" level=info msg="Checking pod default_busybox-5bc68d56bd-bxf99 for CNI network kindnet (type=ptp)"
	Jan 09 00:26:06 multinode-979047 crio[893]: time="2024-01-09 00:26:06.985827251Z" level=info msg="Ran pod sandbox 5d89837032d52d2e96170c64d643ddeacb42e77d9db4dd0f4b9a197de8507943 with infra container: default/busybox-5bc68d56bd-bxf99/POD" id=b6c84b9a-17f3-4211-9b20-1889ad5e5335 name=/runtime.v1.RuntimeService/RunPodSandbox
	Jan 09 00:26:06 multinode-979047 crio[893]: time="2024-01-09 00:26:06.987086921Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28" id=de7386da-aba8-44db-9410-755caa1969a9 name=/runtime.v1.ImageService/ImageStatus
	Jan 09 00:26:06 multinode-979047 crio[893]: time="2024-01-09 00:26:06.987417033Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28 not found" id=de7386da-aba8-44db-9410-755caa1969a9 name=/runtime.v1.ImageService/ImageStatus
	Jan 09 00:26:06 multinode-979047 crio[893]: time="2024-01-09 00:26:06.988113958Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28" id=d643f688-c265-4192-8cc9-b9de675aa2be name=/runtime.v1.ImageService/PullImage
	Jan 09 00:26:06 multinode-979047 crio[893]: time="2024-01-09 00:26:06.989092338Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28\""
	Jan 09 00:26:07 multinode-979047 crio[893]: time="2024-01-09 00:26:07.587450165Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28\""
	Jan 09 00:26:08 multinode-979047 crio[893]: time="2024-01-09 00:26:08.688401852Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:859d41e4316c182cb559f9ae3c5ffcac8602ee1179794a1707c06cd092a008d3" id=d643f688-c265-4192-8cc9-b9de675aa2be name=/runtime.v1.ImageService/PullImage
	Jan 09 00:26:08 multinode-979047 crio[893]: time="2024-01-09 00:26:08.689764014Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28" id=009fa370-664d-413a-ade8-98cda933a8ca name=/runtime.v1.ImageService/ImageStatus
	Jan 09 00:26:08 multinode-979047 crio[893]: time="2024-01-09 00:26:08.690428717Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:89a35e2ebb6b938201966889b5e8c85b931db6432c5643966116cd1c28bf45cd,RepoTags:[gcr.io/k8s-minikube/busybox:1.28],RepoDigests:[gcr.io/k8s-minikube/busybox@sha256:859d41e4316c182cb559f9ae3c5ffcac8602ee1179794a1707c06cd092a008d3 gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12],Size_:1496796,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=009fa370-664d-413a-ade8-98cda933a8ca name=/runtime.v1.ImageService/ImageStatus
	Jan 09 00:26:08 multinode-979047 crio[893]: time="2024-01-09 00:26:08.691295047Z" level=info msg="Creating container: default/busybox-5bc68d56bd-bxf99/busybox" id=6ba1ce67-7289-491e-8f5d-e3ea23f26ea9 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 09 00:26:08 multinode-979047 crio[893]: time="2024-01-09 00:26:08.691386560Z" level=warning msg="Allowed annotations are specified for workload []"
	Jan 09 00:26:08 multinode-979047 crio[893]: time="2024-01-09 00:26:08.755439931Z" level=info msg="Created container 6a549dd3dd56d252cf9d67df3a15c10c9bcc1454a1a702eb5d58b2fd14124bd5: default/busybox-5bc68d56bd-bxf99/busybox" id=6ba1ce67-7289-491e-8f5d-e3ea23f26ea9 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 09 00:26:08 multinode-979047 crio[893]: time="2024-01-09 00:26:08.756199930Z" level=info msg="Starting container: 6a549dd3dd56d252cf9d67df3a15c10c9bcc1454a1a702eb5d58b2fd14124bd5" id=50368423-4501-4435-8149-ebf55e05bd37 name=/runtime.v1.RuntimeService/StartContainer
	Jan 09 00:26:08 multinode-979047 crio[893]: time="2024-01-09 00:26:08.767308983Z" level=info msg="Started container" PID=2066 containerID=6a549dd3dd56d252cf9d67df3a15c10c9bcc1454a1a702eb5d58b2fd14124bd5 description=default/busybox-5bc68d56bd-bxf99/busybox id=50368423-4501-4435-8149-ebf55e05bd37 name=/runtime.v1.RuntimeService/StartContainer sandboxID=5d89837032d52d2e96170c64d643ddeacb42e77d9db4dd0f4b9a197de8507943
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	6a549dd3dd56d       gcr.io/k8s-minikube/busybox@sha256:859d41e4316c182cb559f9ae3c5ffcac8602ee1179794a1707c06cd092a008d3   5 seconds ago        Running             busybox                   0                   5d89837032d52       busybox-5bc68d56bd-bxf99
	1f15e3afeb2c8       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                      About a minute ago   Running             coredns                   0                   77cf86082128a       coredns-5dd5756b68-shbhd
	77a9783e675a3       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      About a minute ago   Running             storage-provisioner       0                   8a8c41083ce89       storage-provisioner
	51d7e406f4306       04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26                                      About a minute ago   Running             kindnet-cni               0                   17a4ea8c79300       kindnet-b4fpt
	aa483c1cd9a04       3ca3ca488cf13fde14cfc4b3ffde0c53a8c161b030f4a444a797fba6aef38c39                                      About a minute ago   Running             kube-proxy                0                   4ca9803033884       kube-proxy-r5w9b
	fdf3ccdca2e33       05c284c929889d88306fdb3dd14ee2d0132543740f9e247685243214fc3d2c54                                      About a minute ago   Running             kube-scheduler            0                   e6dd77db22787       kube-scheduler-multinode-979047
	aba17a1c7ee36       04b4c447bb9d4840af3bf7e836397379d65df87c86e55dcd27f31a8d11df2419                                      About a minute ago   Running             kube-apiserver            0                   cb1cc097ed653       kube-apiserver-multinode-979047
	079bc57587fb0       9961cbceaf234d59b7dcf8a197a024f3e3ce4b7fe2b67c2378efd3d209ca994b                                      About a minute ago   Running             kube-controller-manager   0                   1c012cf784868       kube-controller-manager-multinode-979047
	aaa56d80b0c69       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                      About a minute ago   Running             etcd                      0                   4d17eac484d99       etcd-multinode-979047
	
	
	==> coredns [1f15e3afeb2c8a6a9d2eab1037a344b3c89c528b17d2b2cde2fde2251385ee68] <==
	[INFO] 10.244.1.2:58515 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000163661s
	[INFO] 10.244.0.3:42755 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000132054s
	[INFO] 10.244.0.3:39809 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001256709s
	[INFO] 10.244.0.3:33374 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000083258s
	[INFO] 10.244.0.3:51420 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000047049s
	[INFO] 10.244.0.3:46397 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001170037s
	[INFO] 10.244.0.3:44354 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000057149s
	[INFO] 10.244.0.3:42618 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000056608s
	[INFO] 10.244.0.3:44024 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000072788s
	[INFO] 10.244.1.2:43538 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000102746s
	[INFO] 10.244.1.2:48982 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000067323s
	[INFO] 10.244.1.2:47034 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000064s
	[INFO] 10.244.1.2:50072 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000068686s
	[INFO] 10.244.0.3:55581 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000112888s
	[INFO] 10.244.0.3:51571 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000067874s
	[INFO] 10.244.0.3:51197 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000056542s
	[INFO] 10.244.0.3:49279 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000100596s
	[INFO] 10.244.1.2:54197 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000114463s
	[INFO] 10.244.1.2:60438 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00013185s
	[INFO] 10.244.1.2:35425 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000212811s
	[INFO] 10.244.1.2:55434 - 5 "PTR IN 1.58.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000109547s
	[INFO] 10.244.0.3:41535 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000090257s
	[INFO] 10.244.0.3:49346 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000056993s
	[INFO] 10.244.0.3:55898 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000064041s
	[INFO] 10.244.0.3:46800 - 5 "PTR IN 1.58.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000055713s
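
The coredns entries above are A/AAAA lookups for in-cluster names such as kubernetes.default.svc.cluster.local, plus PTR lookups whose reversed names correspond to 10.96.0.1 (the kubernetes service) and 10.96.0.10 (the cluster DNS service). A short Go sketch that reproduces one such lookup from inside the cluster network; the 10.96.0.10 resolver address is an assumption read off the PTR records in the log, not live configuration:

    package main

    import (
        "context"
        "fmt"
        "net"
        "time"
    )

    func main() {
        // Send queries straight to the cluster DNS service rather than the
        // host's resolver; only reachable from inside the cluster network.
        r := &net.Resolver{
            PreferGo: true,
            Dial: func(ctx context.Context, network, address string) (net.Conn, error) {
                d := net.Dialer{Timeout: 2 * time.Second}
                return d.DialContext(ctx, "udp", "10.96.0.10:53")
            },
        }
        addrs, err := r.LookupHost(context.Background(), "kubernetes.default.svc.cluster.local")
        if err != nil {
            panic(err)
        }
        fmt.Println(addrs) // expect the kubernetes service ClusterIP, e.g. 10.96.0.1
    }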
	
	
	==> describe nodes <==
	Name:               multinode-979047
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=multinode-979047
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a
	                    minikube.k8s.io/name=multinode-979047
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_09T00_24_55_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 09 Jan 2024 00:24:51 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-979047
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 09 Jan 2024 00:26:06 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 09 Jan 2024 00:25:10 +0000   Tue, 09 Jan 2024 00:24:48 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 09 Jan 2024 00:25:10 +0000   Tue, 09 Jan 2024 00:24:48 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 09 Jan 2024 00:25:10 +0000   Tue, 09 Jan 2024 00:24:48 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 09 Jan 2024 00:25:10 +0000   Tue, 09 Jan 2024 00:25:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.2
	  Hostname:    multinode-979047
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	System Info:
	  Machine ID:                 1fe4592c1b314c8ba2635fc2398c6d83
	  System UUID:                2e615a28-0951-4b51-ad21-2a6f9632ed74
	  Boot ID:                    9a753e90-64b1-452a-8e10-9b878947801f
	  Kernel Version:             5.15.0-1051-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-bxf99                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 coredns-5dd5756b68-shbhd                    100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     67s
	  kube-system                 etcd-multinode-979047                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         80s
	  kube-system                 kindnet-b4fpt                               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      67s
	  kube-system                 kube-apiserver-multinode-979047             250m (12%)    0 (0%)      0 (0%)           0 (0%)         80s
	  kube-system                 kube-controller-manager-multinode-979047    200m (10%)    0 (0%)      0 (0%)           0 (0%)         81s
	  kube-system                 kube-proxy-r5w9b                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         67s
	  kube-system                 kube-scheduler-multinode-979047             100m (5%)     0 (0%)      0 (0%)           0 (0%)         80s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         66s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 64s                kube-proxy       
	  Normal  Starting                 87s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  87s (x8 over 87s)  kubelet          Node multinode-979047 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    87s (x8 over 87s)  kubelet          Node multinode-979047 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     87s (x8 over 87s)  kubelet          Node multinode-979047 status is now: NodeHasSufficientPID
	  Normal  Starting                 80s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  80s                kubelet          Node multinode-979047 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    80s                kubelet          Node multinode-979047 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     80s                kubelet          Node multinode-979047 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           68s                node-controller  Node multinode-979047 event: Registered Node multinode-979047 in Controller
	  Normal  NodeReady                64s                kubelet          Node multinode-979047 status is now: NodeReady
	
	
	Name:               multinode-979047-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=multinode-979047-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a
	                    minikube.k8s.io/name=multinode-979047
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_01_09T00_25_33_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 09 Jan 2024 00:25:32 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-979047-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 09 Jan 2024 00:26:12 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 09 Jan 2024 00:26:04 +0000   Tue, 09 Jan 2024 00:25:32 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 09 Jan 2024 00:26:04 +0000   Tue, 09 Jan 2024 00:25:32 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 09 Jan 2024 00:26:04 +0000   Tue, 09 Jan 2024 00:25:32 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 09 Jan 2024 00:26:04 +0000   Tue, 09 Jan 2024 00:26:04 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.3
	  Hostname:    multinode-979047-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	System Info:
	  Machine ID:                 5f315470e79349dd880bc8ad442c60c8
	  System UUID:                c477c551-b449-46c4-85e7-f116de26c497
	  Boot ID:                    9a753e90-64b1-452a-8e10-9b878947801f
	  Kernel Version:             5.15.0-1051-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-4v5vc    0 (0%)        0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 kindnet-hz4tb               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      42s
	  kube-system                 kube-proxy-v2s5z            0 (0%)        0 (0%)      0 (0%)           0 (0%)         42s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	  hugepages-32Mi     0 (0%)     0 (0%)
	  hugepages-64Ki     0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 40s                kube-proxy       
	  Normal  NodeHasSufficientMemory  42s (x5 over 44s)  kubelet          Node multinode-979047-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    42s (x5 over 44s)  kubelet          Node multinode-979047-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     42s (x5 over 44s)  kubelet          Node multinode-979047-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           38s                node-controller  Node multinode-979047-m02 event: Registered Node multinode-979047-m02 in Controller
	  Normal  NodeReady                10s                kubelet          Node multinode-979047-m02 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.001079] FS-Cache: O-key=[8] '2f76ed0000000000'
	[  +0.000720] FS-Cache: N-cookie c=00000066 [p=0000005d fl=2 nc=0 na=1]
	[  +0.001023] FS-Cache: N-cookie d=000000001df03bef{9p.inode} n=0000000009be8c6c
	[  +0.001112] FS-Cache: N-key=[8] '2f76ed0000000000'
	[  +0.010607] FS-Cache: Duplicate cookie detected
	[  +0.000806] FS-Cache: O-cookie c=00000060 [p=0000005d fl=226 nc=0 na=1]
	[  +0.001106] FS-Cache: O-cookie d=000000001df03bef{9p.inode} n=00000000b45aa7e6
	[  +0.001139] FS-Cache: O-key=[8] '2f76ed0000000000'
	[  +0.000750] FS-Cache: N-cookie c=00000067 [p=0000005d fl=2 nc=0 na=1]
	[  +0.001054] FS-Cache: N-cookie d=000000001df03bef{9p.inode} n=00000000e9f33a46
	[  +0.001189] FS-Cache: N-key=[8] '2f76ed0000000000'
	[  +2.185619] FS-Cache: Duplicate cookie detected
	[  +0.000751] FS-Cache: O-cookie c=0000005e [p=0000005d fl=226 nc=0 na=1]
	[  +0.001094] FS-Cache: O-cookie d=000000001df03bef{9p.inode} n=00000000ed3c59a1
	[  +0.001046] FS-Cache: O-key=[8] '2e76ed0000000000'
	[  +0.000727] FS-Cache: N-cookie c=00000069 [p=0000005d fl=2 nc=0 na=1]
	[  +0.000928] FS-Cache: N-cookie d=000000001df03bef{9p.inode} n=00000000f2938397
	[  +0.001085] FS-Cache: N-key=[8] '2e76ed0000000000'
	[  +0.397498] FS-Cache: Duplicate cookie detected
	[  +0.000731] FS-Cache: O-cookie c=00000063 [p=0000005d fl=226 nc=0 na=1]
	[  +0.001010] FS-Cache: O-cookie d=000000001df03bef{9p.inode} n=000000005629db1e
	[  +0.001140] FS-Cache: O-key=[8] '3476ed0000000000'
	[  +0.000717] FS-Cache: N-cookie c=0000006a [p=0000005d fl=2 nc=0 na=1]
	[  +0.000990] FS-Cache: N-cookie d=000000001df03bef{9p.inode} n=0000000009be8c6c
	[  +0.001266] FS-Cache: N-key=[8] '3476ed0000000000'
	
	
	==> etcd [aaa56d80b0c694ffb6a6b83b9bcd0bb2d3fe1d987df77ece00f4182e6daa03bd] <==
	{"level":"info","ts":"2024-01-09T00:24:47.819001Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-01-09T00:24:47.819195Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"b2c6679ac05f2cf1","initial-advertise-peer-urls":["https://192.168.58.2:2380"],"listen-peer-urls":["https://192.168.58.2:2380"],"advertise-client-urls":["https://192.168.58.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.58.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-01-09T00:24:47.81922Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-01-09T00:24:47.820014Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2024-01-09T00:24:47.820417Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2024-01-09T00:24:47.828763Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 switched to configuration voters=(12882097698489969905)"}
	{"level":"info","ts":"2024-01-09T00:24:47.828866Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","added-peer-id":"b2c6679ac05f2cf1","added-peer-peer-urls":["https://192.168.58.2:2380"]}
	{"level":"info","ts":"2024-01-09T00:24:48.166942Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 is starting a new election at term 1"}
	{"level":"info","ts":"2024-01-09T00:24:48.167067Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-01-09T00:24:48.167128Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgPreVoteResp from b2c6679ac05f2cf1 at term 1"}
	{"level":"info","ts":"2024-01-09T00:24:48.167178Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became candidate at term 2"}
	{"level":"info","ts":"2024-01-09T00:24:48.167219Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgVoteResp from b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2024-01-09T00:24:48.167256Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became leader at term 2"}
	{"level":"info","ts":"2024-01-09T00:24:48.167298Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b2c6679ac05f2cf1 elected leader b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2024-01-09T00:24:48.170638Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"b2c6679ac05f2cf1","local-member-attributes":"{Name:multinode-979047 ClientURLs:[https://192.168.58.2:2379]}","request-path":"/0/members/b2c6679ac05f2cf1/attributes","cluster-id":"3a56e4ca95e2355c","publish-timeout":"7s"}
	{"level":"info","ts":"2024-01-09T00:24:48.170814Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-09T00:24:48.170868Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-09T00:24:48.171994Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-01-09T00:24:48.17234Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.58.2:2379"}
	{"level":"info","ts":"2024-01-09T00:24:48.172421Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-09T00:24:48.172566Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-01-09T00:24:48.172585Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-01-09T00:24:48.174637Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-09T00:24:48.174715Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-09T00:24:48.174741Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	
	
	==> kernel <==
	 00:26:14 up  7:08,  0 users,  load average: 1.35, 1.53, 1.80
	Linux multinode-979047 5.15.0-1051-aws #56~20.04.1-Ubuntu SMP Tue Nov 28 15:43:06 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	
	==> kindnet [51d7e406f430639ff01adbaac11893dfeb09d887cd9552efd28e7b82d2b9c1e9] <==
	I0109 00:25:09.623782       1 main.go:146] kindnetd IP family: "ipv4"
	I0109 00:25:09.623792       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0109 00:25:10.021434       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0109 00:25:10.021561       1 main.go:227] handling current node
	I0109 00:25:20.035760       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0109 00:25:20.035787       1 main.go:227] handling current node
	I0109 00:25:30.048868       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0109 00:25:30.049029       1 main.go:227] handling current node
	I0109 00:25:40.053517       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0109 00:25:40.053546       1 main.go:227] handling current node
	I0109 00:25:40.053557       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I0109 00:25:40.053563       1 main.go:250] Node multinode-979047-m02 has CIDR [10.244.1.0/24] 
	I0109 00:25:40.053718       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 192.168.58.3 Flags: [] Table: 0} 
	I0109 00:25:50.061055       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0109 00:25:50.061084       1 main.go:227] handling current node
	I0109 00:25:50.061095       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I0109 00:25:50.061102       1 main.go:250] Node multinode-979047-m02 has CIDR [10.244.1.0/24] 
	I0109 00:26:00.072520       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0109 00:26:00.072554       1 main.go:227] handling current node
	I0109 00:26:00.072573       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I0109 00:26:00.072579       1 main.go:250] Node multinode-979047-m02 has CIDR [10.244.1.0/24] 
	I0109 00:26:10.077762       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0109 00:26:10.077799       1 main.go:227] handling current node
	I0109 00:26:10.077810       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I0109 00:26:10.077815       1 main.go:250] Node multinode-979047-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [aba17a1c7ee3696cb52e53b0da3af52e340bd803c5d311d87bbfc1c884794fbf] <==
	I0109 00:24:51.641486       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0109 00:24:51.641612       1 aggregator.go:166] initial CRD sync complete...
	I0109 00:24:51.641665       1 autoregister_controller.go:141] Starting autoregister controller
	I0109 00:24:51.641694       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0109 00:24:51.641725       1 cache.go:39] Caches are synced for autoregister controller
	I0109 00:24:51.656475       1 shared_informer.go:318] Caches are synced for configmaps
	I0109 00:24:51.660269       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0109 00:24:51.662545       1 controller.go:624] quota admission added evaluator for: namespaces
	I0109 00:24:51.711279       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0109 00:24:52.420597       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0109 00:24:52.429840       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0109 00:24:52.429866       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0109 00:24:52.983582       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0109 00:24:53.064278       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0109 00:24:53.193881       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0109 00:24:53.201169       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.58.2]
	I0109 00:24:53.202238       1 controller.go:624] quota admission added evaluator for: endpoints
	I0109 00:24:53.208425       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0109 00:24:53.591771       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0109 00:24:54.415886       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0109 00:24:54.434777       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0109 00:24:54.455230       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0109 00:25:07.126727       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I0109 00:25:07.327985       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	E0109 00:26:11.644382       1 upgradeaware.go:425] Error proxying data from client to backend: write tcp 192.168.58.2:36524->192.168.58.2:10250: write: broken pipe
	
	
	==> kube-controller-manager [079bc57587fb0e31668978667cc9f3d5efeeaac835d4941546130351680ad021] <==
	I0109 00:25:07.907269       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="100.677784ms"
	I0109 00:25:07.907373       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="63.557µs"
	I0109 00:25:10.373293       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="804.922µs"
	I0109 00:25:10.391774       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="105.461µs"
	I0109 00:25:11.420439       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I0109 00:25:11.701501       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="8.042648ms"
	I0109 00:25:11.701614       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="73.503µs"
	I0109 00:25:32.318083       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-979047-m02\" does not exist"
	I0109 00:25:32.333488       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-979047-m02" podCIDRs=["10.244.1.0/24"]
	I0109 00:25:32.341945       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-hz4tb"
	I0109 00:25:32.348221       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-v2s5z"
	I0109 00:25:36.424280       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-979047-m02"
	I0109 00:25:36.424351       1 event.go:307] "Event occurred" object="multinode-979047-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-979047-m02 event: Registered Node multinode-979047-m02 in Controller"
	I0109 00:26:04.145526       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-979047-m02"
	I0109 00:26:06.584532       1 event.go:307] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-5bc68d56bd to 2"
	I0109 00:26:06.606657       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-4v5vc"
	I0109 00:26:06.622938       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-bxf99"
	I0109 00:26:06.635362       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="50.528411ms"
	I0109 00:26:06.656417       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="20.949411ms"
	I0109 00:26:06.685046       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="28.492386ms"
	I0109 00:26:06.685220       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="68.808µs"
	I0109 00:26:08.981248       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="4.833946ms"
	I0109 00:26:08.981524       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="48.419µs"
	I0109 00:26:09.793199       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="5.696665ms"
	I0109 00:26:09.793926       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="52.07µs"
	
	
	==> kube-proxy [aa483c1cd9a04caf82fd15e6b8d8f8953a1c1c22ee0401181c1db6785097b2ab] <==
	I0109 00:25:09.359598       1 server_others.go:69] "Using iptables proxy"
	I0109 00:25:09.373756       1 node.go:141] Successfully retrieved node IP: 192.168.58.2
	I0109 00:25:09.396269       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0109 00:25:09.398315       1 server_others.go:152] "Using iptables Proxier"
	I0109 00:25:09.398390       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0109 00:25:09.398682       1 server_others.go:438] "Defaulting to no-op detect-local"
	I0109 00:25:09.398763       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0109 00:25:09.399027       1 server.go:846] "Version info" version="v1.28.4"
	I0109 00:25:09.399348       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0109 00:25:09.400246       1 config.go:188] "Starting service config controller"
	I0109 00:25:09.400653       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0109 00:25:09.400721       1 config.go:97] "Starting endpoint slice config controller"
	I0109 00:25:09.400770       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0109 00:25:09.401330       1 config.go:315] "Starting node config controller"
	I0109 00:25:09.402747       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0109 00:25:09.501642       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0109 00:25:09.501755       1 shared_informer.go:318] Caches are synced for service config
	I0109 00:25:09.502925       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [fdf3ccdca2e33118285cf3a64fd3491ee27579ab80215d95ba5119708ae05f80] <==
	W0109 00:24:51.627914       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0109 00:24:51.627928       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0109 00:24:51.633771       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0109 00:24:51.633872       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0109 00:24:52.446391       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0109 00:24:52.446429       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0109 00:24:52.487890       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0109 00:24:52.487923       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0109 00:24:52.515775       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0109 00:24:52.515884       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0109 00:24:52.655823       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0109 00:24:52.655927       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0109 00:24:52.672410       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0109 00:24:52.672531       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0109 00:24:52.680709       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0109 00:24:52.680831       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0109 00:24:52.691933       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0109 00:24:52.692054       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0109 00:24:52.705758       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0109 00:24:52.705863       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0109 00:24:52.795934       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0109 00:24:52.796052       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0109 00:24:52.815148       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0109 00:24:52.815289       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0109 00:24:55.403234       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jan 09 00:25:07 multinode-979047 kubelet[1381]: I0109 00:25:07.433220    1381 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ttdxx\" (UniqueName: \"kubernetes.io/projected/0b49bb1e-f3f4-4760-bb78-97d8bc5ae4e6-kube-api-access-ttdxx\") pod \"kube-proxy-r5w9b\" (UID: \"0b49bb1e-f3f4-4760-bb78-97d8bc5ae4e6\") " pod="kube-system/kube-proxy-r5w9b"
	Jan 09 00:25:08 multinode-979047 kubelet[1381]: E0109 00:25:08.640732    1381 projected.go:292] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Jan 09 00:25:08 multinode-979047 kubelet[1381]: E0109 00:25:08.640792    1381 projected.go:198] Error preparing data for projected volume kube-api-access-ttdxx for pod kube-system/kube-proxy-r5w9b: failed to sync configmap cache: timed out waiting for the condition
	Jan 09 00:25:08 multinode-979047 kubelet[1381]: E0109 00:25:08.640896    1381 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0b49bb1e-f3f4-4760-bb78-97d8bc5ae4e6-kube-api-access-ttdxx podName:0b49bb1e-f3f4-4760-bb78-97d8bc5ae4e6 nodeName:}" failed. No retries permitted until 2024-01-09 00:25:09.140871157 +0000 UTC m=+14.750240074 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-ttdxx" (UniqueName: "kubernetes.io/projected/0b49bb1e-f3f4-4760-bb78-97d8bc5ae4e6-kube-api-access-ttdxx") pod "kube-proxy-r5w9b" (UID: "0b49bb1e-f3f4-4760-bb78-97d8bc5ae4e6") : failed to sync configmap cache: timed out waiting for the condition
	Jan 09 00:25:08 multinode-979047 kubelet[1381]: E0109 00:25:08.671528    1381 projected.go:292] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Jan 09 00:25:08 multinode-979047 kubelet[1381]: E0109 00:25:08.671576    1381 projected.go:198] Error preparing data for projected volume kube-api-access-gqlxw for pod kube-system/kindnet-b4fpt: failed to sync configmap cache: timed out waiting for the condition
	Jan 09 00:25:08 multinode-979047 kubelet[1381]: E0109 00:25:08.671651    1381 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/11e40151-521e-4937-90fb-feb0d88d49ce-kube-api-access-gqlxw podName:11e40151-521e-4937-90fb-feb0d88d49ce nodeName:}" failed. No retries permitted until 2024-01-09 00:25:09.171629196 +0000 UTC m=+14.780998113 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-gqlxw" (UniqueName: "kubernetes.io/projected/11e40151-521e-4937-90fb-feb0d88d49ce-kube-api-access-gqlxw") pod "kindnet-b4fpt" (UID: "11e40151-521e-4937-90fb-feb0d88d49ce") : failed to sync configmap cache: timed out waiting for the condition
	Jan 09 00:25:09 multinode-979047 kubelet[1381]: W0109 00:25:09.225664    1381 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/4ab6ef7ad13d9d90167e1fb36a66ae45b1b7b7b23777e167f992d915692cf603/crio-4ca98030338847bbdc4d237075085abd0c330c002687bd4785b68f5e51a3b8d1 WatchSource:0}: Error finding container 4ca98030338847bbdc4d237075085abd0c330c002687bd4785b68f5e51a3b8d1: Status 404 returned error can't find the container with id 4ca98030338847bbdc4d237075085abd0c330c002687bd4785b68f5e51a3b8d1
	Jan 09 00:25:09 multinode-979047 kubelet[1381]: W0109 00:25:09.498828    1381 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/4ab6ef7ad13d9d90167e1fb36a66ae45b1b7b7b23777e167f992d915692cf603/crio-17a4ea8c79300a874952b862b5e7f092a1dbe270f09a8ed57f8610021bc81e8f WatchSource:0}: Error finding container 17a4ea8c79300a874952b862b5e7f092a1dbe270f09a8ed57f8610021bc81e8f: Status 404 returned error can't find the container with id 17a4ea8c79300a874952b862b5e7f092a1dbe270f09a8ed57f8610021bc81e8f
	Jan 09 00:25:09 multinode-979047 kubelet[1381]: I0109 00:25:09.693183    1381 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-b4fpt" podStartSLOduration=2.693138834 podCreationTimestamp="2024-01-09 00:25:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-01-09 00:25:09.677409096 +0000 UTC m=+15.286778013" watchObservedRunningTime="2024-01-09 00:25:09.693138834 +0000 UTC m=+15.302507751"
	Jan 09 00:25:10 multinode-979047 kubelet[1381]: I0109 00:25:10.344506    1381 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Jan 09 00:25:10 multinode-979047 kubelet[1381]: I0109 00:25:10.371521    1381 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-r5w9b" podStartSLOduration=3.37147825 podCreationTimestamp="2024-01-09 00:25:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-01-09 00:25:09.693440638 +0000 UTC m=+15.302809563" watchObservedRunningTime="2024-01-09 00:25:10.37147825 +0000 UTC m=+15.980847166"
	Jan 09 00:25:10 multinode-979047 kubelet[1381]: I0109 00:25:10.371869    1381 topology_manager.go:215] "Topology Admit Handler" podUID="46759197-0373-4f95-ba9c-8065624d0f27" podNamespace="kube-system" podName="coredns-5dd5756b68-shbhd"
	Jan 09 00:25:10 multinode-979047 kubelet[1381]: I0109 00:25:10.376936    1381 topology_manager.go:215] "Topology Admit Handler" podUID="b69dd807-2575-40fe-87fc-53a9e39c9b2d" podNamespace="kube-system" podName="storage-provisioner"
	Jan 09 00:25:10 multinode-979047 kubelet[1381]: I0109 00:25:10.453541    1381 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/46759197-0373-4f95-ba9c-8065624d0f27-config-volume\") pod \"coredns-5dd5756b68-shbhd\" (UID: \"46759197-0373-4f95-ba9c-8065624d0f27\") " pod="kube-system/coredns-5dd5756b68-shbhd"
	Jan 09 00:25:10 multinode-979047 kubelet[1381]: I0109 00:25:10.453604    1381 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/b69dd807-2575-40fe-87fc-53a9e39c9b2d-tmp\") pod \"storage-provisioner\" (UID: \"b69dd807-2575-40fe-87fc-53a9e39c9b2d\") " pod="kube-system/storage-provisioner"
	Jan 09 00:25:10 multinode-979047 kubelet[1381]: I0109 00:25:10.453635    1381 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w82zt\" (UniqueName: \"kubernetes.io/projected/b69dd807-2575-40fe-87fc-53a9e39c9b2d-kube-api-access-w82zt\") pod \"storage-provisioner\" (UID: \"b69dd807-2575-40fe-87fc-53a9e39c9b2d\") " pod="kube-system/storage-provisioner"
	Jan 09 00:25:10 multinode-979047 kubelet[1381]: I0109 00:25:10.453665    1381 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lqpm2\" (UniqueName: \"kubernetes.io/projected/46759197-0373-4f95-ba9c-8065624d0f27-kube-api-access-lqpm2\") pod \"coredns-5dd5756b68-shbhd\" (UID: \"46759197-0373-4f95-ba9c-8065624d0f27\") " pod="kube-system/coredns-5dd5756b68-shbhd"
	Jan 09 00:25:10 multinode-979047 kubelet[1381]: W0109 00:25:10.710613    1381 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/4ab6ef7ad13d9d90167e1fb36a66ae45b1b7b7b23777e167f992d915692cf603/crio-77cf86082128a9ac0ced40b39aeb40d66776076b1f3894dde1d36e5e81c6f076 WatchSource:0}: Error finding container 77cf86082128a9ac0ced40b39aeb40d66776076b1f3894dde1d36e5e81c6f076: Status 404 returned error can't find the container with id 77cf86082128a9ac0ced40b39aeb40d66776076b1f3894dde1d36e5e81c6f076
	Jan 09 00:25:10 multinode-979047 kubelet[1381]: W0109 00:25:10.711267    1381 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/4ab6ef7ad13d9d90167e1fb36a66ae45b1b7b7b23777e167f992d915692cf603/crio-8a8c41083ce898a3ca355f86b125e927cc22acd65f25730c56a8dfb8a5749006 WatchSource:0}: Error finding container 8a8c41083ce898a3ca355f86b125e927cc22acd65f25730c56a8dfb8a5749006: Status 404 returned error can't find the container with id 8a8c41083ce898a3ca355f86b125e927cc22acd65f25730c56a8dfb8a5749006
	Jan 09 00:25:11 multinode-979047 kubelet[1381]: I0109 00:25:11.692338    1381 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=3.69229434 podCreationTimestamp="2024-01-09 00:25:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-01-09 00:25:11.680135366 +0000 UTC m=+17.289504282" watchObservedRunningTime="2024-01-09 00:25:11.69229434 +0000 UTC m=+17.301663265"
	Jan 09 00:25:14 multinode-979047 kubelet[1381]: I0109 00:25:14.546077    1381 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-shbhd" podStartSLOduration=7.54603233 podCreationTimestamp="2024-01-09 00:25:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-01-09 00:25:11.692870001 +0000 UTC m=+17.302238917" watchObservedRunningTime="2024-01-09 00:25:14.54603233 +0000 UTC m=+20.155401255"
	Jan 09 00:26:06 multinode-979047 kubelet[1381]: I0109 00:26:06.644500    1381 topology_manager.go:215] "Topology Admit Handler" podUID="1b733dbe-e070-42ab-ad52-ad8256f0cdd5" podNamespace="default" podName="busybox-5bc68d56bd-bxf99"
	Jan 09 00:26:06 multinode-979047 kubelet[1381]: I0109 00:26:06.684671    1381 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5vnl\" (UniqueName: \"kubernetes.io/projected/1b733dbe-e070-42ab-ad52-ad8256f0cdd5-kube-api-access-s5vnl\") pod \"busybox-5bc68d56bd-bxf99\" (UID: \"1b733dbe-e070-42ab-ad52-ad8256f0cdd5\") " pod="default/busybox-5bc68d56bd-bxf99"
	Jan 09 00:26:06 multinode-979047 kubelet[1381]: W0109 00:26:06.983721    1381 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/4ab6ef7ad13d9d90167e1fb36a66ae45b1b7b7b23777e167f992d915692cf603/crio-5d89837032d52d2e96170c64d643ddeacb42e77d9db4dd0f4b9a197de8507943 WatchSource:0}: Error finding container 5d89837032d52d2e96170c64d643ddeacb42e77d9db4dd0f4b9a197de8507943: Status 404 returned error can't find the container with id 5d89837032d52d2e96170c64d643ddeacb42e77d9db4dd0f4b9a197de8507943
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p multinode-979047 -n multinode-979047
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-979047 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (3.98s)
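Note on the failure above: PingHostFrom2Pods asserts that each busybox pod can resolve and then ping host.minikube.internal. A rough manual repro sketch, assuming the busybox pods from the post-mortem logs are still running (the pod name is taken from the log above; the awk/cut parsing of busybox's nslookup output is an assumption and may need adjusting for other images):

	# resolve the host IP from inside one of the pods
	kubectl --context multinode-979047 exec busybox-5bc68d56bd-4v5vc -- sh -c "nslookup host.minikube.internal"
	# ping the resolved address once; a non-zero exit here reproduces the test failure
	kubectl --context multinode-979047 exec busybox-5bc68d56bd-4v5vc -- sh -c "ping -c 1 \$(nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3)"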

                                                
                                    
x
+
TestScheduledStopUnix (38.46s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-334324 --memory=2048 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-334324 --memory=2048 --driver=docker  --container-runtime=crio: (33.616201404s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-334324 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-334324 -n scheduled-stop-334324
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-334324 --schedule 15s
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:98: process 1779820 running but should have been killed on reschedule of stop
panic.go:523: *** TestScheduledStopUnix FAILED at 2024-01-09 00:35:38.563974406 +0000 UTC m=+2104.113600192
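Note on the assertion at scheduled_stop_test.go:98 above: issuing a new "stop --schedule" is expected to kill the previously daemonized scheduled-stop process before spawning a new one, so process 1779820 surviving the reschedule is the bug. A minimal sketch of that expectation, assuming a running profile (the pgrep pattern is an assumption about how the daemonized process appears in the process table):

	# schedule a stop, remember the daemonized process, then reschedule
	out/minikube-linux-arm64 stop -p scheduled-stop-334324 --schedule 5m
	first_pid=$(pgrep -f "minikube-linux-arm64 stop -p scheduled-stop-334324" | head -n1)
	out/minikube-linux-arm64 stop -p scheduled-stop-334324 --schedule 15s
	# the first process should now be gone; if kill -0 still succeeds, we have hit the bug above
	kill -0 "$first_pid" 2>/dev/null && echo "old scheduled stop (pid $first_pid) still running"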
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestScheduledStopUnix]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect scheduled-stop-334324
helpers_test.go:235: (dbg) docker inspect scheduled-stop-334324:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "de6198f7c8ab2c8594209e923e08b819634d075f99ee3d2c75189a0665856555",
	        "Created": "2024-01-09T00:35:09.864338137Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1778118,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-01-09T00:35:10.228670506Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a5be0745bf7211988da1521fe4ee64cb5f5dee2ca8e3061f061c5272199c616c",
	        "ResolvConfPath": "/var/lib/docker/containers/de6198f7c8ab2c8594209e923e08b819634d075f99ee3d2c75189a0665856555/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/de6198f7c8ab2c8594209e923e08b819634d075f99ee3d2c75189a0665856555/hostname",
	        "HostsPath": "/var/lib/docker/containers/de6198f7c8ab2c8594209e923e08b819634d075f99ee3d2c75189a0665856555/hosts",
	        "LogPath": "/var/lib/docker/containers/de6198f7c8ab2c8594209e923e08b819634d075f99ee3d2c75189a0665856555/de6198f7c8ab2c8594209e923e08b819634d075f99ee3d2c75189a0665856555-json.log",
	        "Name": "/scheduled-stop-334324",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "scheduled-stop-334324:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "scheduled-stop-334324",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2147483648,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/82939cfb1c01ccaa573c22f2e7a709ea8f6829cc082b3d37bc043262b311ef1b-init/diff:/var/lib/docker/overlay2/a443ad727e446e5b332ea48292deac5ef22cb43b6aa42ee65e414679b2407c31/diff",
	                "MergedDir": "/var/lib/docker/overlay2/82939cfb1c01ccaa573c22f2e7a709ea8f6829cc082b3d37bc043262b311ef1b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/82939cfb1c01ccaa573c22f2e7a709ea8f6829cc082b3d37bc043262b311ef1b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/82939cfb1c01ccaa573c22f2e7a709ea8f6829cc082b3d37bc043262b311ef1b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "scheduled-stop-334324",
	                "Source": "/var/lib/docker/volumes/scheduled-stop-334324/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "scheduled-stop-334324",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "scheduled-stop-334324",
	                "name.minikube.sigs.k8s.io": "scheduled-stop-334324",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "eabb12ff564d1ff4790018bc7780cbd39bfc0d21dd0711b709f3d0695e6fb25e",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34504"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34503"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34500"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34502"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34501"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/eabb12ff564d",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "scheduled-stop-334324": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "de6198f7c8ab",
	                        "scheduled-stop-334324"
	                    ],
	                    "NetworkID": "3fd2d3df784b83ba32c6632d34f0d1d8daab09b0b15f24126199aa28f970489c",
	                    "EndpointID": "b127c3007c9920e8ecc8154aafcd0ab9389271a4069ebe674c1e363d9e0ab002",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-334324 -n scheduled-stop-334324
helpers_test.go:244: <<< TestScheduledStopUnix FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestScheduledStopUnix]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p scheduled-stop-334324 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p scheduled-stop-334324 logs -n 25: (1.196399796s)
helpers_test.go:252: TestScheduledStopUnix logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|-----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |        Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|-----------------------|---------|---------|---------------------|---------------------|
	| stop    | -p multinode-979047            | multinode-979047      | jenkins | v1.32.0 | 09 Jan 24 00:27 UTC | 09 Jan 24 00:27 UTC |
	| start   | -p multinode-979047            | multinode-979047      | jenkins | v1.32.0 | 09 Jan 24 00:27 UTC | 09 Jan 24 00:29 UTC |
	|         | --wait=true -v=8               |                       |         |         |                     |                     |
	|         | --alsologtostderr              |                       |         |         |                     |                     |
	| node    | list -p multinode-979047       | multinode-979047      | jenkins | v1.32.0 | 09 Jan 24 00:29 UTC |                     |
	| node    | multinode-979047 node delete   | multinode-979047      | jenkins | v1.32.0 | 09 Jan 24 00:29 UTC | 09 Jan 24 00:29 UTC |
	|         | m03                            |                       |         |         |                     |                     |
	| stop    | multinode-979047 stop          | multinode-979047      | jenkins | v1.32.0 | 09 Jan 24 00:29 UTC | 09 Jan 24 00:30 UTC |
	| start   | -p multinode-979047            | multinode-979047      | jenkins | v1.32.0 | 09 Jan 24 00:30 UTC | 09 Jan 24 00:31 UTC |
	|         | --wait=true -v=8               |                       |         |         |                     |                     |
	|         | --alsologtostderr              |                       |         |         |                     |                     |
	|         | --driver=docker                |                       |         |         |                     |                     |
	|         | --container-runtime=crio       |                       |         |         |                     |                     |
	| node    | list -p multinode-979047       | multinode-979047      | jenkins | v1.32.0 | 09 Jan 24 00:31 UTC |                     |
	| start   | -p multinode-979047-m02        | multinode-979047-m02  | jenkins | v1.32.0 | 09 Jan 24 00:31 UTC |                     |
	|         | --driver=docker                |                       |         |         |                     |                     |
	|         | --container-runtime=crio       |                       |         |         |                     |                     |
	| start   | -p multinode-979047-m03        | multinode-979047-m03  | jenkins | v1.32.0 | 09 Jan 24 00:31 UTC | 09 Jan 24 00:32 UTC |
	|         | --driver=docker                |                       |         |         |                     |                     |
	|         | --container-runtime=crio       |                       |         |         |                     |                     |
	| node    | add -p multinode-979047        | multinode-979047      | jenkins | v1.32.0 | 09 Jan 24 00:32 UTC |                     |
	| delete  | -p multinode-979047-m03        | multinode-979047-m03  | jenkins | v1.32.0 | 09 Jan 24 00:32 UTC | 09 Jan 24 00:32 UTC |
	| delete  | -p multinode-979047            | multinode-979047      | jenkins | v1.32.0 | 09 Jan 24 00:32 UTC | 09 Jan 24 00:32 UTC |
	| start   | -p test-preload-483637         | test-preload-483637   | jenkins | v1.32.0 | 09 Jan 24 00:32 UTC | 09 Jan 24 00:33 UTC |
	|         | --memory=2200                  |                       |         |         |                     |                     |
	|         | --alsologtostderr              |                       |         |         |                     |                     |
	|         | --wait=true --preload=false    |                       |         |         |                     |                     |
	|         | --driver=docker                |                       |         |         |                     |                     |
	|         | --container-runtime=crio       |                       |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.4   |                       |         |         |                     |                     |
	| image   | test-preload-483637 image pull | test-preload-483637   | jenkins | v1.32.0 | 09 Jan 24 00:33 UTC | 09 Jan 24 00:33 UTC |
	|         | gcr.io/k8s-minikube/busybox    |                       |         |         |                     |                     |
	| stop    | -p test-preload-483637         | test-preload-483637   | jenkins | v1.32.0 | 09 Jan 24 00:33 UTC | 09 Jan 24 00:33 UTC |
	| start   | -p test-preload-483637         | test-preload-483637   | jenkins | v1.32.0 | 09 Jan 24 00:33 UTC | 09 Jan 24 00:35 UTC |
	|         | --memory=2200                  |                       |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                       |         |         |                     |                     |
	|         | --wait=true --driver=docker    |                       |         |         |                     |                     |
	|         | --container-runtime=crio       |                       |         |         |                     |                     |
	| image   | test-preload-483637 image list | test-preload-483637   | jenkins | v1.32.0 | 09 Jan 24 00:35 UTC | 09 Jan 24 00:35 UTC |
	| delete  | -p test-preload-483637         | test-preload-483637   | jenkins | v1.32.0 | 09 Jan 24 00:35 UTC | 09 Jan 24 00:35 UTC |
	| start   | -p scheduled-stop-334324       | scheduled-stop-334324 | jenkins | v1.32.0 | 09 Jan 24 00:35 UTC | 09 Jan 24 00:35 UTC |
	|         | --memory=2048 --driver=docker  |                       |         |         |                     |                     |
	|         | --container-runtime=crio       |                       |         |         |                     |                     |
	| stop    | -p scheduled-stop-334324       | scheduled-stop-334324 | jenkins | v1.32.0 | 09 Jan 24 00:35 UTC |                     |
	|         | --schedule 5m                  |                       |         |         |                     |                     |
	| stop    | -p scheduled-stop-334324       | scheduled-stop-334324 | jenkins | v1.32.0 | 09 Jan 24 00:35 UTC |                     |
	|         | --schedule 5m                  |                       |         |         |                     |                     |
	| stop    | -p scheduled-stop-334324       | scheduled-stop-334324 | jenkins | v1.32.0 | 09 Jan 24 00:35 UTC |                     |
	|         | --schedule 5m                  |                       |         |         |                     |                     |
	| stop    | -p scheduled-stop-334324       | scheduled-stop-334324 | jenkins | v1.32.0 | 09 Jan 24 00:35 UTC |                     |
	|         | --schedule 15s                 |                       |         |         |                     |                     |
	| stop    | -p scheduled-stop-334324       | scheduled-stop-334324 | jenkins | v1.32.0 | 09 Jan 24 00:35 UTC |                     |
	|         | --schedule 15s                 |                       |         |         |                     |                     |
	| stop    | -p scheduled-stop-334324       | scheduled-stop-334324 | jenkins | v1.32.0 | 09 Jan 24 00:35 UTC |                     |
	|         | --schedule 15s                 |                       |         |         |                     |                     |
	|---------|--------------------------------|-----------------------|---------|---------|---------------------|---------------------|
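
The six stop --schedule rows above show no completion time because each invocation only arms (or re-arms) a timer; the stop itself fires later from a background process. A minimal Go sketch of that wait-then-stop pattern; the exec'd command and the re-arm via Reset are illustrative assumptions, not minikube's actual implementation:

	package main

	import (
		"log"
		"os/exec"
		"time"
	)

	// scheduleStop arms a one-shot timer; a later call can reschedule it,
	// which is how repeated --schedule invocations supersede one another here.
	func scheduleStop(profile string, after time.Duration) *time.Timer {
		return time.AfterFunc(after, func() {
			// Hypothetical: shell out to the minikube binary for the real stop.
			out, err := exec.Command("minikube", "stop", "-p", profile).CombinedOutput()
			if err != nil {
				log.Printf("scheduled stop failed: %v\n%s", err, out)
			}
		})
	}

	func main() {
		t := scheduleStop("scheduled-stop-334324", 5*time.Minute)
		t.Reset(15 * time.Second) // a later --schedule 15s replaces the pending 5m stop
		time.Sleep(20 * time.Second)
	}
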
	
	
	==> Last Start <==
	Log file created at: 2024/01/09 00:35:04
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.21.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0109 00:35:04.429937 1777665 out.go:296] Setting OutFile to fd 1 ...
	I0109 00:35:04.430084 1777665 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0109 00:35:04.430087 1777665 out.go:309] Setting ErrFile to fd 2...
	I0109 00:35:04.430092 1777665 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0109 00:35:04.430388 1777665 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17830-1678586/.minikube/bin
	I0109 00:35:04.430834 1777665 out.go:303] Setting JSON to false
	I0109 00:35:04.431697 1777665 start.go:128] hostinfo: {"hostname":"ip-172-31-30-239","uptime":26247,"bootTime":1704734258,"procs":165,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0109 00:35:04.431763 1777665 start.go:138] virtualization:  
	I0109 00:35:04.434797 1777665 out.go:177] * [scheduled-stop-334324] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0109 00:35:04.437702 1777665 out.go:177]   - MINIKUBE_LOCATION=17830
	I0109 00:35:04.439651 1777665 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0109 00:35:04.437877 1777665 notify.go:220] Checking for updates...
	I0109 00:35:04.444652 1777665 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17830-1678586/kubeconfig
	I0109 00:35:04.447203 1777665 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17830-1678586/.minikube
	I0109 00:35:04.449685 1777665 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0109 00:35:04.451903 1777665 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0109 00:35:04.454375 1777665 driver.go:392] Setting default libvirt URI to qemu:///system
	I0109 00:35:04.478706 1777665 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I0109 00:35:04.478833 1777665 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0109 00:35:04.559698 1777665 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:35 SystemTime:2024-01-09 00:35:04.549107122 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0109 00:35:04.559791 1777665 docker.go:295] overlay module found
	I0109 00:35:04.562309 1777665 out.go:177] * Using the docker driver based on user configuration
	I0109 00:35:04.564727 1777665 start.go:298] selected driver: docker
	I0109 00:35:04.564736 1777665 start.go:902] validating driver "docker" against <nil>
	I0109 00:35:04.564753 1777665 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0109 00:35:04.565348 1777665 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0109 00:35:04.632834 1777665 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:35 SystemTime:2024-01-09 00:35:04.623725385 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0109 00:35:04.632994 1777665 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0109 00:35:04.633228 1777665 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I0109 00:35:04.635635 1777665 out.go:177] * Using Docker driver with root privileges
	I0109 00:35:04.637699 1777665 cni.go:84] Creating CNI manager for ""
	I0109 00:35:04.637729 1777665 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0109 00:35:04.637742 1777665 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I0109 00:35:04.637751 1777665 start_flags.go:323] config:
	{Name:scheduled-stop-334324 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:scheduled-stop-334324 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0109 00:35:04.640447 1777665 out.go:177] * Starting control plane node scheduled-stop-334324 in cluster scheduled-stop-334324
	I0109 00:35:04.642616 1777665 cache.go:121] Beginning downloading kic base image for docker with crio
	I0109 00:35:04.644767 1777665 out.go:177] * Pulling base image v0.0.42-1704751654-17830 ...
	I0109 00:35:04.647157 1777665 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0109 00:35:04.647208 1777665 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17830-1678586/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4
	I0109 00:35:04.647216 1777665 cache.go:56] Caching tarball of preloaded images
	I0109 00:35:04.647249 1777665 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 in local docker daemon
	I0109 00:35:04.647295 1777665 preload.go:174] Found /home/jenkins/minikube-integration/17830-1678586/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0109 00:35:04.647303 1777665 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0109 00:35:04.647651 1777665 profile.go:148] Saving config to /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/scheduled-stop-334324/config.json ...
	I0109 00:35:04.647670 1777665 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/scheduled-stop-334324/config.json: {Name:mk348b1941c6ef75971feb840b91335efc51bb04 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0109 00:35:04.665501 1777665 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 in local docker daemon, skipping pull
	I0109 00:35:04.665515 1777665 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 exists in daemon, skipping load
	I0109 00:35:04.665536 1777665 cache.go:194] Successfully downloaded all kic artifacts
	I0109 00:35:04.665598 1777665 start.go:365] acquiring machines lock for scheduled-stop-334324: {Name:mkbf528ccfaac1ca7b56716e87a67d5ec165192c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0109 00:35:04.665713 1777665 start.go:369] acquired machines lock for "scheduled-stop-334324" in 98.873µs
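
start.go:365 above takes a per-profile machines lock whose retry policy is visible in the dump ({Delay:500ms Timeout:10m0s}). A minimal retry-until-deadline sketch of that acquire pattern; sync.Mutex.TryLock stands in for the real file-lock primitive:

	package main

	import (
		"fmt"
		"sync"
		"time"
	)

	// acquire retries tryLock every delay until timeout elapses, mirroring
	// the Delay/Timeout fields printed for the machines lock above.
	func acquire(tryLock func() bool, delay, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for !tryLock() {
			if time.Now().After(deadline) {
				return fmt.Errorf("lock not acquired within %v", timeout)
			}
			time.Sleep(delay)
		}
		return nil
	}

	func main() {
		var mu sync.Mutex
		fmt.Println(acquire(mu.TryLock, 500*time.Millisecond, 10*time.Minute)) // <nil>: uncontended
	}
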
	I0109 00:35:04.665738 1777665 start.go:93] Provisioning new machine with config: &{Name:scheduled-stop-334324 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:scheduled-stop-334324 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0109 00:35:04.665814 1777665 start.go:125] createHost starting for "" (driver="docker")
	I0109 00:35:04.670418 1777665 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0109 00:35:04.670683 1777665 start.go:159] libmachine.API.Create for "scheduled-stop-334324" (driver="docker")
	I0109 00:35:04.670726 1777665 client.go:168] LocalClient.Create starting
	I0109 00:35:04.670813 1777665 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17830-1678586/.minikube/certs/ca.pem
	I0109 00:35:04.670844 1777665 main.go:141] libmachine: Decoding PEM data...
	I0109 00:35:04.670857 1777665 main.go:141] libmachine: Parsing certificate...
	I0109 00:35:04.670916 1777665 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17830-1678586/.minikube/certs/cert.pem
	I0109 00:35:04.670932 1777665 main.go:141] libmachine: Decoding PEM data...
	I0109 00:35:04.670946 1777665 main.go:141] libmachine: Parsing certificate...
	I0109 00:35:04.671308 1777665 cli_runner.go:164] Run: docker network inspect scheduled-stop-334324 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0109 00:35:04.687845 1777665 cli_runner.go:211] docker network inspect scheduled-stop-334324 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0109 00:35:04.687926 1777665 network_create.go:281] running [docker network inspect scheduled-stop-334324] to gather additional debugging logs...
	I0109 00:35:04.687947 1777665 cli_runner.go:164] Run: docker network inspect scheduled-stop-334324
	W0109 00:35:04.705093 1777665 cli_runner.go:211] docker network inspect scheduled-stop-334324 returned with exit code 1
	I0109 00:35:04.705115 1777665 network_create.go:284] error running [docker network inspect scheduled-stop-334324]: docker network inspect scheduled-stop-334324: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network scheduled-stop-334324 not found
	I0109 00:35:04.705125 1777665 network_create.go:286] output of [docker network inspect scheduled-stop-334324]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network scheduled-stop-334324 not found
	
	** /stderr **
	I0109 00:35:04.705224 1777665 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0109 00:35:04.722219 1777665 network.go:214] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-105ffd575afe IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:d2:7c:7b:ae} reservation:<nil>}
	I0109 00:35:04.722544 1777665 network.go:214] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-65d7500bf19c IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:d3:cd:64:67} reservation:<nil>}
	I0109 00:35:04.722917 1777665 network.go:209] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4002568850}
	I0109 00:35:04.722933 1777665 network_create.go:124] attempt to create docker network scheduled-stop-334324 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I0109 00:35:04.722985 1777665 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=scheduled-stop-334324 scheduled-stop-334324
	I0109 00:35:04.796099 1777665 network_create.go:108] docker network scheduled-stop-334324 192.168.67.0/24 created
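
The network.go lines above probe candidate /24s in a fixed series (192.168.49.0, then .58, then .67) until one is unclaimed. A minimal Go sketch of that stepping order, assuming a constant step of 9 as the skipped subnets suggest; isTaken stands in for inspecting the existing docker bridge networks:

	package main

	import "fmt"

	// firstFreeSubnet walks 192.168.49.0/24, 192.168.58.0/24, ... and returns
	// the first candidate the probe reports as free.
	func firstFreeSubnet(isTaken func(string) bool) string {
		for octet := 49; octet < 255; octet += 9 {
			cidr := fmt.Sprintf("192.168.%d.0/24", octet)
			if !isTaken(cidr) {
				return cidr
			}
		}
		return ""
	}

	func main() {
		taken := map[string]bool{"192.168.49.0/24": true, "192.168.58.0/24": true}
		fmt.Println(firstFreeSubnet(func(c string) bool { return taken[c] })) // 192.168.67.0/24
	}
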
	I0109 00:35:04.796121 1777665 kic.go:121] calculated static IP "192.168.67.2" for the "scheduled-stop-334324" container
	I0109 00:35:04.796211 1777665 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0109 00:35:04.812185 1777665 cli_runner.go:164] Run: docker volume create scheduled-stop-334324 --label name.minikube.sigs.k8s.io=scheduled-stop-334324 --label created_by.minikube.sigs.k8s.io=true
	I0109 00:35:04.830674 1777665 oci.go:103] Successfully created a docker volume scheduled-stop-334324
	I0109 00:35:04.830742 1777665 cli_runner.go:164] Run: docker run --rm --name scheduled-stop-334324-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=scheduled-stop-334324 --entrypoint /usr/bin/test -v scheduled-stop-334324:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 -d /var/lib
	I0109 00:35:05.379245 1777665 oci.go:107] Successfully prepared a docker volume scheduled-stop-334324
	I0109 00:35:05.379283 1777665 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0109 00:35:05.379301 1777665 kic.go:194] Starting extracting preloaded images to volume ...
	I0109 00:35:05.379374 1777665 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17830-1678586/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v scheduled-stop-334324:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 -I lz4 -xf /preloaded.tar -C /extractDir
	I0109 00:35:09.782117 1777665 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17830-1678586/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v scheduled-stop-334324:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 -I lz4 -xf /preloaded.tar -C /extractDir: (4.402709048s)
	I0109 00:35:09.782139 1777665 kic.go:203] duration metric: took 4.402835 seconds to extract preloaded images to volume
	W0109 00:35:09.782277 1777665 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0109 00:35:09.782386 1777665 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0109 00:35:09.848577 1777665 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname scheduled-stop-334324 --name scheduled-stop-334324 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=scheduled-stop-334324 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=scheduled-stop-334324 --network scheduled-stop-334324 --ip 192.168.67.2 --volume scheduled-stop-334324:/var --security-opt apparmor=unconfined --memory=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617
	I0109 00:35:10.237401 1777665 cli_runner.go:164] Run: docker container inspect scheduled-stop-334324 --format={{.State.Running}}
	I0109 00:35:10.258791 1777665 cli_runner.go:164] Run: docker container inspect scheduled-stop-334324 --format={{.State.Status}}
	I0109 00:35:10.289559 1777665 cli_runner.go:164] Run: docker exec scheduled-stop-334324 stat /var/lib/dpkg/alternatives/iptables
	I0109 00:35:10.367992 1777665 oci.go:144] the created container "scheduled-stop-334324" has a running status.
	I0109 00:35:10.368012 1777665 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17830-1678586/.minikube/machines/scheduled-stop-334324/id_rsa...
	I0109 00:35:10.565545 1777665 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17830-1678586/.minikube/machines/scheduled-stop-334324/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0109 00:35:10.596093 1777665 cli_runner.go:164] Run: docker container inspect scheduled-stop-334324 --format={{.State.Status}}
	I0109 00:35:10.619741 1777665 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0109 00:35:10.619752 1777665 kic_runner.go:114] Args: [docker exec --privileged scheduled-stop-334324 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0109 00:35:10.712656 1777665 cli_runner.go:164] Run: docker container inspect scheduled-stop-334324 --format={{.State.Status}}
	I0109 00:35:10.737059 1777665 machine.go:88] provisioning docker machine ...
	I0109 00:35:10.737081 1777665 ubuntu.go:169] provisioning hostname "scheduled-stop-334324"
	I0109 00:35:10.737152 1777665 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-334324
	I0109 00:35:10.760559 1777665 main.go:141] libmachine: Using SSH client type: native
	I0109 00:35:10.760975 1777665 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bfa60] 0x3c21d0 <nil>  [] 0s} 127.0.0.1 34504 <nil> <nil>}
	I0109 00:35:10.760985 1777665 main.go:141] libmachine: About to run SSH command:
	sudo hostname scheduled-stop-334324 && echo "scheduled-stop-334324" | sudo tee /etc/hostname
	I0109 00:35:10.762327 1777665 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0109 00:35:13.926620 1777665 main.go:141] libmachine: SSH cmd err, output: <nil>: scheduled-stop-334324
	
	I0109 00:35:13.926690 1777665 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-334324
	I0109 00:35:13.951178 1777665 main.go:141] libmachine: Using SSH client type: native
	I0109 00:35:13.951575 1777665 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bfa60] 0x3c21d0 <nil>  [] 0s} 127.0.0.1 34504 <nil> <nil>}
	I0109 00:35:13.951591 1777665 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sscheduled-stop-334324' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 scheduled-stop-334324/g' /etc/hosts;
				else 
					echo '127.0.1.1 scheduled-stop-334324' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0109 00:35:14.099655 1777665 main.go:141] libmachine: SSH cmd err, output: <nil>: 
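
The two exchanges above run provisioning commands over SSH against the container's published 22/tcp port (127.0.0.1:34504 here); the "native" client is Go's golang.org/x/crypto/ssh. A minimal sketch of that dial-and-run step, reusing the key path from the log; skipping host-key verification is for illustration only:

	package main

	import (
		"fmt"
		"log"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		key, err := os.ReadFile("/home/jenkins/minikube-integration/17830-1678586/.minikube/machines/scheduled-stop-334324/id_rsa")
		if err != nil {
			log.Fatal(err)
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			log.Fatal(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only for a throwaway test container
		}
		client, err := ssh.Dial("tcp", "127.0.0.1:34504", cfg)
		if err != nil {
			log.Fatal(err)
		}
		defer client.Close()
		sess, err := client.NewSession()
		if err != nil {
			log.Fatal(err)
		}
		defer sess.Close()
		// The same hostname command the provisioner issues above.
		out, err := sess.CombinedOutput(`sudo hostname scheduled-stop-334324 && echo "scheduled-stop-334324" | sudo tee /etc/hostname`)
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("%s", out)
	}
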
	I0109 00:35:14.099676 1777665 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17830-1678586/.minikube CaCertPath:/home/jenkins/minikube-integration/17830-1678586/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17830-1678586/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17830-1678586/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17830-1678586/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17830-1678586/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17830-1678586/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17830-1678586/.minikube}
	I0109 00:35:14.099694 1777665 ubuntu.go:177] setting up certificates
	I0109 00:35:14.099702 1777665 provision.go:83] configureAuth start
	I0109 00:35:14.099806 1777665 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" scheduled-stop-334324
	I0109 00:35:14.118344 1777665 provision.go:138] copyHostCerts
	I0109 00:35:14.118402 1777665 exec_runner.go:144] found /home/jenkins/minikube-integration/17830-1678586/.minikube/key.pem, removing ...
	I0109 00:35:14.118410 1777665 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17830-1678586/.minikube/key.pem
	I0109 00:35:14.118526 1777665 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17830-1678586/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17830-1678586/.minikube/key.pem (1679 bytes)
	I0109 00:35:14.118631 1777665 exec_runner.go:144] found /home/jenkins/minikube-integration/17830-1678586/.minikube/ca.pem, removing ...
	I0109 00:35:14.118635 1777665 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17830-1678586/.minikube/ca.pem
	I0109 00:35:14.118660 1777665 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17830-1678586/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17830-1678586/.minikube/ca.pem (1082 bytes)
	I0109 00:35:14.118725 1777665 exec_runner.go:144] found /home/jenkins/minikube-integration/17830-1678586/.minikube/cert.pem, removing ...
	I0109 00:35:14.118729 1777665 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17830-1678586/.minikube/cert.pem
	I0109 00:35:14.118751 1777665 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17830-1678586/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17830-1678586/.minikube/cert.pem (1123 bytes)
	I0109 00:35:14.118804 1777665 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17830-1678586/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17830-1678586/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17830-1678586/.minikube/certs/ca-key.pem org=jenkins.scheduled-stop-334324 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube scheduled-stop-334324]
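
provision.go:112 above issues a server certificate whose SAN set covers the node IP, loopback, and the minikube hostnames. A minimal crypto/x509 sketch of assembling that SAN set (self-signed here for brevity; the real flow signs with the ca.pem/ca-key.pem pair from the log):

	package main

	import (
		"crypto/ecdsa"
		"crypto/elliptic"
		"crypto/rand"
		"crypto/x509"
		"crypto/x509/pkix"
		"log"
		"math/big"
		"net"
		"time"
	)

	func main() {
		key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
		if err != nil {
			log.Fatal(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.scheduled-stop-334324"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
			// SANs matching the san=[...] list in the log line above.
			IPAddresses: []net.IP{net.ParseIP("192.168.67.2"), net.ParseIP("127.0.0.1")},
			DNSNames:    []string{"localhost", "minikube", "scheduled-stop-334324"},
			KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			log.Fatal(err)
		}
		log.Printf("issued %d-byte DER server cert", len(der))
	}
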
	I0109 00:35:14.841357 1777665 provision.go:172] copyRemoteCerts
	I0109 00:35:14.841425 1777665 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0109 00:35:14.841465 1777665 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-334324
	I0109 00:35:14.860987 1777665 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34504 SSHKeyPath:/home/jenkins/minikube-integration/17830-1678586/.minikube/machines/scheduled-stop-334324/id_rsa Username:docker}
	I0109 00:35:14.965224 1777665 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-1678586/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0109 00:35:14.994144 1777665 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-1678586/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0109 00:35:15.026493 1777665 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-1678586/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0109 00:35:15.057994 1777665 provision.go:86] duration metric: configureAuth took 958.278207ms
	I0109 00:35:15.058011 1777665 ubuntu.go:193] setting minikube options for container-runtime
	I0109 00:35:15.058227 1777665 config.go:182] Loaded profile config "scheduled-stop-334324": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0109 00:35:15.058332 1777665 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-334324
	I0109 00:35:15.077024 1777665 main.go:141] libmachine: Using SSH client type: native
	I0109 00:35:15.077470 1777665 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bfa60] 0x3c21d0 <nil>  [] 0s} 127.0.0.1 34504 <nil> <nil>}
	I0109 00:35:15.077483 1777665 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0109 00:35:15.340363 1777665 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0109 00:35:15.340375 1777665 machine.go:91] provisioned docker machine in 4.603303996s
	I0109 00:35:15.340383 1777665 client.go:171] LocalClient.Create took 10.669652222s
	I0109 00:35:15.340398 1777665 start.go:167] duration metric: libmachine.API.Create for "scheduled-stop-334324" took 10.669716887s
	I0109 00:35:15.340405 1777665 start.go:300] post-start starting for "scheduled-stop-334324" (driver="docker")
	I0109 00:35:15.340415 1777665 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0109 00:35:15.340485 1777665 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0109 00:35:15.340530 1777665 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-334324
	I0109 00:35:15.358265 1777665 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34504 SSHKeyPath:/home/jenkins/minikube-integration/17830-1678586/.minikube/machines/scheduled-stop-334324/id_rsa Username:docker}
	I0109 00:35:15.461252 1777665 ssh_runner.go:195] Run: cat /etc/os-release
	I0109 00:35:15.465390 1777665 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0109 00:35:15.465418 1777665 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0109 00:35:15.465428 1777665 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0109 00:35:15.465433 1777665 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0109 00:35:15.465442 1777665 filesync.go:126] Scanning /home/jenkins/minikube-integration/17830-1678586/.minikube/addons for local assets ...
	I0109 00:35:15.465498 1777665 filesync.go:126] Scanning /home/jenkins/minikube-integration/17830-1678586/.minikube/files for local assets ...
	I0109 00:35:15.465583 1777665 filesync.go:149] local asset: /home/jenkins/minikube-integration/17830-1678586/.minikube/files/etc/ssl/certs/16839672.pem -> 16839672.pem in /etc/ssl/certs
	I0109 00:35:15.465685 1777665 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0109 00:35:15.475950 1777665 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-1678586/.minikube/files/etc/ssl/certs/16839672.pem --> /etc/ssl/certs/16839672.pem (1708 bytes)
	I0109 00:35:15.504880 1777665 start.go:303] post-start completed in 164.461012ms
	I0109 00:35:15.505263 1777665 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" scheduled-stop-334324
	I0109 00:35:15.522371 1777665 profile.go:148] Saving config to /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/scheduled-stop-334324/config.json ...
	I0109 00:35:15.522667 1777665 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0109 00:35:15.522708 1777665 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-334324
	I0109 00:35:15.540500 1777665 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34504 SSHKeyPath:/home/jenkins/minikube-integration/17830-1678586/.minikube/machines/scheduled-stop-334324/id_rsa Username:docker}
	I0109 00:35:15.640349 1777665 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0109 00:35:15.645991 1777665 start.go:128] duration metric: createHost completed in 10.980161432s
	I0109 00:35:15.646006 1777665 start.go:83] releasing machines lock for "scheduled-stop-334324", held for 10.98028547s
	I0109 00:35:15.646083 1777665 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" scheduled-stop-334324
	I0109 00:35:15.663444 1777665 ssh_runner.go:195] Run: cat /version.json
	I0109 00:35:15.663485 1777665 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-334324
	I0109 00:35:15.663535 1777665 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0109 00:35:15.663589 1777665 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-334324
	I0109 00:35:15.681980 1777665 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34504 SSHKeyPath:/home/jenkins/minikube-integration/17830-1678586/.minikube/machines/scheduled-stop-334324/id_rsa Username:docker}
	I0109 00:35:15.692197 1777665 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34504 SSHKeyPath:/home/jenkins/minikube-integration/17830-1678586/.minikube/machines/scheduled-stop-334324/id_rsa Username:docker}
	I0109 00:35:15.908237 1777665 ssh_runner.go:195] Run: systemctl --version
	I0109 00:35:15.913796 1777665 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0109 00:35:16.059298 1777665 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0109 00:35:16.064734 1777665 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0109 00:35:16.089268 1777665 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0109 00:35:16.089349 1777665 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0109 00:35:16.127585 1777665 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0109 00:35:16.127597 1777665 start.go:475] detecting cgroup driver to use...
	I0109 00:35:16.127632 1777665 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0109 00:35:16.127692 1777665 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0109 00:35:16.145032 1777665 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0109 00:35:16.158941 1777665 docker.go:203] disabling cri-docker service (if available) ...
	I0109 00:35:16.158997 1777665 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0109 00:35:16.175070 1777665 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0109 00:35:16.191984 1777665 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0109 00:35:16.293764 1777665 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0109 00:35:16.394980 1777665 docker.go:219] disabling docker service ...
	I0109 00:35:16.395036 1777665 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0109 00:35:16.417348 1777665 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0109 00:35:16.433036 1777665 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0109 00:35:16.542247 1777665 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0109 00:35:16.663278 1777665 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0109 00:35:16.678853 1777665 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0109 00:35:16.700084 1777665 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0109 00:35:16.700150 1777665 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0109 00:35:16.712843 1777665 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0109 00:35:16.712923 1777665 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0109 00:35:16.725324 1777665 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0109 00:35:16.737537 1777665 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0109 00:35:16.748963 1777665 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0109 00:35:16.760517 1777665 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0109 00:35:16.770553 1777665 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0109 00:35:16.780935 1777665 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0109 00:35:16.879639 1777665 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0109 00:35:16.996942 1777665 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0109 00:35:16.997001 1777665 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0109 00:35:17.002139 1777665 start.go:543] Will wait 60s for crictl version
	I0109 00:35:17.002194 1777665 ssh_runner.go:195] Run: which crictl
	I0109 00:35:17.007030 1777665 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0109 00:35:17.057726 1777665 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0109 00:35:17.057812 1777665 ssh_runner.go:195] Run: crio --version
	I0109 00:35:17.099828 1777665 ssh_runner.go:195] Run: crio --version
	I0109 00:35:17.149251 1777665 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.6 ...
	I0109 00:35:17.151482 1777665 cli_runner.go:164] Run: docker network inspect scheduled-stop-334324 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0109 00:35:17.169370 1777665 ssh_runner.go:195] Run: grep 192.168.67.1	host.minikube.internal$ /etc/hosts
	I0109 00:35:17.174086 1777665 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.67.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0109 00:35:17.187444 1777665 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0109 00:35:17.187497 1777665 ssh_runner.go:195] Run: sudo crictl images --output json
	I0109 00:35:17.252030 1777665 crio.go:496] all images are preloaded for cri-o runtime.
	I0109 00:35:17.252043 1777665 crio.go:415] Images already preloaded, skipping extraction
	I0109 00:35:17.252098 1777665 ssh_runner.go:195] Run: sudo crictl images --output json
	I0109 00:35:17.299349 1777665 crio.go:496] all images are preloaded for cri-o runtime.
	I0109 00:35:17.299363 1777665 cache_images.go:84] Images are preloaded, skipping loading
	I0109 00:35:17.299437 1777665 ssh_runner.go:195] Run: crio config
	I0109 00:35:17.356217 1777665 cni.go:84] Creating CNI manager for ""
	I0109 00:35:17.356229 1777665 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0109 00:35:17.356255 1777665 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0109 00:35:17.356274 1777665 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:scheduled-stop-334324 NodeName:scheduled-stop-334324 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0109 00:35:17.356426 1777665 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "scheduled-stop-334324"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
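
kubeadm.go:181 above renders this manifest from the option set printed at kubeadm.go:176. A minimal text/template sketch of that parameters-to-YAML step; the struct and template below are illustrative, not minikube's actual template:

	package main

	import (
		"log"
		"os"
		"text/template"
	)

	// Illustrative subset of the options printed by kubeadm.go:176 above.
	type kubeadmParams struct {
		AdvertiseAddress string
		APIServerPort    int
		NodeName         string
	}

	const initCfg = "apiVersion: kubeadm.k8s.io/v1beta3\n" +
		"kind: InitConfiguration\n" +
		"localAPIEndpoint:\n" +
		"  advertiseAddress: {{.AdvertiseAddress}}\n" +
		"  bindPort: {{.APIServerPort}}\n" +
		"nodeRegistration:\n" +
		"  name: \"{{.NodeName}}\"\n"

	func main() {
		p := kubeadmParams{AdvertiseAddress: "192.168.67.2", APIServerPort: 8443, NodeName: "scheduled-stop-334324"}
		if err := template.Must(template.New("init").Parse(initCfg)).Execute(os.Stdout, p); err != nil {
			log.Fatal(err)
		}
	}
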
	
	I0109 00:35:17.356497 1777665 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=scheduled-stop-334324 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:scheduled-stop-334324 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0109 00:35:17.356570 1777665 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0109 00:35:17.367323 1777665 binaries.go:44] Found k8s binaries, skipping transfer
	I0109 00:35:17.367401 1777665 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0109 00:35:17.377815 1777665 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (431 bytes)
	I0109 00:35:17.398973 1777665 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0109 00:35:17.420256 1777665 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2102 bytes)
	I0109 00:35:17.441591 1777665 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0109 00:35:17.445918 1777665 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0109 00:35:17.459310 1777665 certs.go:56] Setting up /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/scheduled-stop-334324 for IP: 192.168.67.2
	I0109 00:35:17.459334 1777665 certs.go:190] acquiring lock for shared ca certs: {Name:mkd1a8a8c523b20f31a5839efb0f14edb2634692 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0109 00:35:17.459481 1777665 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17830-1678586/.minikube/ca.key
	I0109 00:35:17.459517 1777665 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17830-1678586/.minikube/proxy-client-ca.key
	I0109 00:35:17.459561 1777665 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/scheduled-stop-334324/client.key
	I0109 00:35:17.459570 1777665 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/scheduled-stop-334324/client.crt with IP's: []
	I0109 00:35:17.734090 1777665 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/scheduled-stop-334324/client.crt ...
	I0109 00:35:17.734110 1777665 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/scheduled-stop-334324/client.crt: {Name:mk8fe5f68aae439b8c5f808a97b8a1aaeb6209bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0109 00:35:17.734311 1777665 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/scheduled-stop-334324/client.key ...
	I0109 00:35:17.734319 1777665 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/scheduled-stop-334324/client.key: {Name:mkadc2335bf959e6388a9c7e3115aafdd49ef016 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0109 00:35:17.734421 1777665 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/scheduled-stop-334324/apiserver.key.c7fa3a9e
	I0109 00:35:17.734432 1777665 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/scheduled-stop-334324/apiserver.crt.c7fa3a9e with IP's: [192.168.67.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0109 00:35:18.182386 1777665 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/scheduled-stop-334324/apiserver.crt.c7fa3a9e ...
	I0109 00:35:18.182402 1777665 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/scheduled-stop-334324/apiserver.crt.c7fa3a9e: {Name:mk090e57e13d00799d2e333f14c96c7d548f59a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0109 00:35:18.182610 1777665 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/scheduled-stop-334324/apiserver.key.c7fa3a9e ...
	I0109 00:35:18.182623 1777665 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/scheduled-stop-334324/apiserver.key.c7fa3a9e: {Name:mke1880fa5e7215317d7486a3374a949f37873c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0109 00:35:18.182713 1777665 certs.go:337] copying /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/scheduled-stop-334324/apiserver.crt.c7fa3a9e -> /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/scheduled-stop-334324/apiserver.crt
	I0109 00:35:18.182785 1777665 certs.go:341] copying /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/scheduled-stop-334324/apiserver.key.c7fa3a9e -> /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/scheduled-stop-334324/apiserver.key
	I0109 00:35:18.182834 1777665 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/scheduled-stop-334324/proxy-client.key
	I0109 00:35:18.182844 1777665 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/scheduled-stop-334324/proxy-client.crt with IP's: []
	I0109 00:35:18.918674 1777665 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/scheduled-stop-334324/proxy-client.crt ...
	I0109 00:35:18.918690 1777665 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/scheduled-stop-334324/proxy-client.crt: {Name:mk12c9af9c1e3dc0c8de811ad115736093c46fa3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0109 00:35:18.918894 1777665 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/scheduled-stop-334324/proxy-client.key ...
	I0109 00:35:18.918903 1777665 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/scheduled-stop-334324/proxy-client.key: {Name:mk6f28af19a0b11101522fca804a20a77e668779 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0109 00:35:18.919104 1777665 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-1678586/.minikube/certs/home/jenkins/minikube-integration/17830-1678586/.minikube/certs/1683967.pem (1338 bytes)
	W0109 00:35:18.919143 1777665 certs.go:433] ignoring /home/jenkins/minikube-integration/17830-1678586/.minikube/certs/home/jenkins/minikube-integration/17830-1678586/.minikube/certs/1683967_empty.pem, impossibly tiny 0 bytes
	I0109 00:35:18.919152 1777665 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-1678586/.minikube/certs/home/jenkins/minikube-integration/17830-1678586/.minikube/certs/ca-key.pem (1679 bytes)
	I0109 00:35:18.919180 1777665 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-1678586/.minikube/certs/home/jenkins/minikube-integration/17830-1678586/.minikube/certs/ca.pem (1082 bytes)
	I0109 00:35:18.919205 1777665 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-1678586/.minikube/certs/home/jenkins/minikube-integration/17830-1678586/.minikube/certs/cert.pem (1123 bytes)
	I0109 00:35:18.919285 1777665 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-1678586/.minikube/certs/home/jenkins/minikube-integration/17830-1678586/.minikube/certs/key.pem (1679 bytes)
	I0109 00:35:18.919327 1777665 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-1678586/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17830-1678586/.minikube/files/etc/ssl/certs/16839672.pem (1708 bytes)
	I0109 00:35:18.920519 1777665 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/scheduled-stop-334324/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0109 00:35:18.951672 1777665 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/scheduled-stop-334324/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0109 00:35:18.980960 1777665 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/scheduled-stop-334324/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0109 00:35:19.009750 1777665 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/scheduled-stop-334324/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0109 00:35:19.038367 1777665 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-1678586/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0109 00:35:19.066749 1777665 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-1678586/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0109 00:35:19.094405 1777665 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-1678586/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0109 00:35:19.122894 1777665 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-1678586/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0109 00:35:19.150695 1777665 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-1678586/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0109 00:35:19.178814 1777665 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-1678586/.minikube/certs/1683967.pem --> /usr/share/ca-certificates/1683967.pem (1338 bytes)
	I0109 00:35:19.207015 1777665 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-1678586/.minikube/files/etc/ssl/certs/16839672.pem --> /usr/share/ca-certificates/16839672.pem (1708 bytes)
	I0109 00:35:19.235251 1777665 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
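	The "scp memory --> <path>" entries above stream in-memory content (the kubeconfig, and later the addon manifests) straight to files on the node over the existing SSH connection, with no local temp file. Below is a minimal Go sketch of that pattern, assuming golang.org/x/crypto/ssh; the user, key path, address, and destination are illustrative placeholders, not the values from this run.
	
	// sshwrite.go: stream an in-memory payload to a remote file over SSH,
	// roughly the "scp memory --> <path>" behavior of minikube's ssh_runner.
	package main
	
	import (
		"bytes"
		"fmt"
		"log"
		"os"
	
		"golang.org/x/crypto/ssh"
	)
	
	func writeRemote(client *ssh.Client, data []byte, dest string) error {
		sess, err := client.NewSession()
		if err != nil {
			return err
		}
		defer sess.Close()
		sess.Stdin = bytes.NewReader(data) // payload never touches local disk
		// tee writes stdin to the destination; sudo matters for /var/lib paths.
		return sess.Run(fmt.Sprintf("sudo tee %s >/dev/null", dest))
	}
	
	func main() {
		key, err := os.ReadFile("id_rsa") // illustrative key path
		if err != nil {
			log.Fatal(err)
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			log.Fatal(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test node
		}
		client, err := ssh.Dial("tcp", "127.0.0.1:2222", cfg) // illustrative address
		if err != nil {
			log.Fatal(err)
		}
		defer client.Close()
		if err := writeRemote(client, []byte("example kubeconfig contents\n"), "/tmp/kubeconfig"); err != nil {
			log.Fatal(err)
		}
	}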
	I0109 00:35:19.256202 1777665 ssh_runner.go:195] Run: openssl version
	I0109 00:35:19.263143 1777665 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0109 00:35:19.274333 1777665 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0109 00:35:19.278871 1777665 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan  9 00:02 /usr/share/ca-certificates/minikubeCA.pem
	I0109 00:35:19.278948 1777665 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0109 00:35:19.287323 1777665 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0109 00:35:19.298467 1777665 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1683967.pem && ln -fs /usr/share/ca-certificates/1683967.pem /etc/ssl/certs/1683967.pem"
	I0109 00:35:19.309783 1777665 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1683967.pem
	I0109 00:35:19.314287 1777665 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan  9 00:09 /usr/share/ca-certificates/1683967.pem
	I0109 00:35:19.314362 1777665 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1683967.pem
	I0109 00:35:19.324145 1777665 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1683967.pem /etc/ssl/certs/51391683.0"
	I0109 00:35:19.335500 1777665 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16839672.pem && ln -fs /usr/share/ca-certificates/16839672.pem /etc/ssl/certs/16839672.pem"
	I0109 00:35:19.346808 1777665 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16839672.pem
	I0109 00:35:19.351523 1777665 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan  9 00:09 /usr/share/ca-certificates/16839672.pem
	I0109 00:35:19.351576 1777665 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16839672.pem
	I0109 00:35:19.359992 1777665 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/16839672.pem /etc/ssl/certs/3ec20f2e.0"
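	The openssl x509 -hash / ln -fs pairs above install each CA into /etc/ssl/certs under its OpenSSL subject-hash name (b5213941.0 and friends), which is how OpenSSL locates trusted roots at verification time. A sketch of that convention in Go, shelling out to openssl just as minikube does; the certificate path is illustrative.
	
	// certlink.go: link a CA cert into /etc/ssl/certs under its OpenSSL
	// subject hash, mirroring the "openssl x509 -hash" + "ln -fs" steps above.
	package main
	
	import (
		"fmt"
		"log"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)
	
	// subjectHash shells out to openssl rather than reimplementing the hash
	// (roughly, a truncated SHA-1 of the canonicalized subject name).
	func subjectHash(certPath string) (string, error) {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(out)), nil
	}
	
	func main() {
		cert := "/usr/share/ca-certificates/minikubeCA.pem" // illustrative path
		hash, err := subjectHash(cert)
		if err != nil {
			log.Fatal(err)
		}
		link := filepath.Join("/etc/ssl/certs", hash+".0")
		_ = os.Remove(link) // "-f" semantics: replace a stale link if present
		if err := os.Symlink(cert, link); err != nil {
			log.Fatal(err)
		}
		fmt.Println("linked", cert, "->", link)
	}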
	I0109 00:35:19.371299 1777665 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0109 00:35:19.375493 1777665 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0109 00:35:19.375549 1777665 kubeadm.go:404] StartCluster: {Name:scheduled-stop-334324 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:scheduled-stop-334324 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0109 00:35:19.375625 1777665 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0109 00:35:19.375690 1777665 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0109 00:35:19.415921 1777665 cri.go:89] found id: ""
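	The empty found id result comes from filtering all CRI containers on the io.kubernetes.pod.namespace label, exactly the crictl invocation logged above; nothing is running yet on a fresh node. A small Go sketch of that query, assuming crictl is installed and its runtime endpoint is configured.
	
	// crilist.go: list kube-system container IDs via crictl's label filter,
	// the query behind the "found id" lines above.
	package main
	
	import (
		"fmt"
		"log"
		"os/exec"
		"strings"
	)
	
	func kubeSystemContainers() ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system").Output()
		if err != nil {
			return nil, err
		}
		// --quiet prints one container ID per line; empty output on a fresh node.
		return strings.Fields(string(out)), nil
	}
	
	func main() {
		ids, err := kubeSystemContainers()
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("found %d kube-system containers: %v\n", len(ids), ids)
	}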
	I0109 00:35:19.415989 1777665 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0109 00:35:19.426865 1777665 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0109 00:35:19.437199 1777665 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0109 00:35:19.437252 1777665 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0109 00:35:19.447649 1777665 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0109 00:35:19.447693 1777665 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0109 00:35:19.549118 1777665 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1051-aws\n", err: exit status 1
	I0109 00:35:19.629444 1777665 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0109 00:35:35.742631 1777665 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0109 00:35:35.742700 1777665 kubeadm.go:322] [preflight] Running pre-flight checks
	I0109 00:35:35.742805 1777665 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0109 00:35:35.742870 1777665 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1051-aws
	I0109 00:35:35.742902 1777665 kubeadm.go:322] OS: Linux
	I0109 00:35:35.742950 1777665 kubeadm.go:322] CGROUPS_CPU: enabled
	I0109 00:35:35.742995 1777665 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0109 00:35:35.743047 1777665 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0109 00:35:35.743109 1777665 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0109 00:35:35.743158 1777665 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0109 00:35:35.743208 1777665 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0109 00:35:35.743251 1777665 kubeadm.go:322] CGROUPS_PIDS: enabled
	I0109 00:35:35.743295 1777665 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I0109 00:35:35.743338 1777665 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I0109 00:35:35.743405 1777665 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0109 00:35:35.743493 1777665 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0109 00:35:35.743579 1777665 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0109 00:35:35.743637 1777665 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0109 00:35:35.747914 1777665 out.go:204]   - Generating certificates and keys ...
	I0109 00:35:35.748010 1777665 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0109 00:35:35.748072 1777665 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0109 00:35:35.748133 1777665 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0109 00:35:35.748191 1777665 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0109 00:35:35.748247 1777665 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0109 00:35:35.748293 1777665 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0109 00:35:35.748342 1777665 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0109 00:35:35.748457 1777665 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost scheduled-stop-334324] and IPs [192.168.67.2 127.0.0.1 ::1]
	I0109 00:35:35.748508 1777665 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0109 00:35:35.748621 1777665 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost scheduled-stop-334324] and IPs [192.168.67.2 127.0.0.1 ::1]
	I0109 00:35:35.748681 1777665 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0109 00:35:35.748739 1777665 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0109 00:35:35.748780 1777665 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0109 00:35:35.748831 1777665 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0109 00:35:35.748878 1777665 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0109 00:35:35.748927 1777665 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0109 00:35:35.748992 1777665 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0109 00:35:35.749043 1777665 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0109 00:35:35.749119 1777665 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0109 00:35:35.749180 1777665 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0109 00:35:35.751463 1777665 out.go:204]   - Booting up control plane ...
	I0109 00:35:35.751572 1777665 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0109 00:35:35.751644 1777665 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0109 00:35:35.751705 1777665 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0109 00:35:35.751800 1777665 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0109 00:35:35.751878 1777665 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0109 00:35:35.751914 1777665 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0109 00:35:35.752056 1777665 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0109 00:35:35.752125 1777665 kubeadm.go:322] [apiclient] All control plane components are healthy after 7.503155 seconds
	I0109 00:35:35.752222 1777665 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0109 00:35:35.752336 1777665 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0109 00:35:35.752390 1777665 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0109 00:35:35.752562 1777665 kubeadm.go:322] [mark-control-plane] Marking the node scheduled-stop-334324 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0109 00:35:35.752613 1777665 kubeadm.go:322] [bootstrap-token] Using token: m2k7dy.yffzl73hgodiihn9
	I0109 00:35:35.754998 1777665 out.go:204]   - Configuring RBAC rules ...
	I0109 00:35:35.755140 1777665 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0109 00:35:35.755232 1777665 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0109 00:35:35.755392 1777665 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0109 00:35:35.755519 1777665 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0109 00:35:35.755633 1777665 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0109 00:35:35.755751 1777665 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0109 00:35:35.755865 1777665 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0109 00:35:35.755908 1777665 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0109 00:35:35.755953 1777665 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0109 00:35:35.755956 1777665 kubeadm.go:322] 
	I0109 00:35:35.756016 1777665 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0109 00:35:35.756021 1777665 kubeadm.go:322] 
	I0109 00:35:35.756098 1777665 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0109 00:35:35.756101 1777665 kubeadm.go:322] 
	I0109 00:35:35.756126 1777665 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0109 00:35:35.756185 1777665 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0109 00:35:35.756235 1777665 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0109 00:35:35.756238 1777665 kubeadm.go:322] 
	I0109 00:35:35.756292 1777665 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0109 00:35:35.756295 1777665 kubeadm.go:322] 
	I0109 00:35:35.756344 1777665 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0109 00:35:35.756348 1777665 kubeadm.go:322] 
	I0109 00:35:35.756400 1777665 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0109 00:35:35.756474 1777665 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0109 00:35:35.756541 1777665 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0109 00:35:35.756545 1777665 kubeadm.go:322] 
	I0109 00:35:35.756628 1777665 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0109 00:35:35.756705 1777665 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0109 00:35:35.756708 1777665 kubeadm.go:322] 
	I0109 00:35:35.756792 1777665 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token m2k7dy.yffzl73hgodiihn9 \
	I0109 00:35:35.756895 1777665 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:2f5d2b90e0873ecdcc03ee1f37a9ff73145aa86994d578f7f9f8008617cee046 \
	I0109 00:35:35.756915 1777665 kubeadm.go:322] 	--control-plane 
	I0109 00:35:35.756919 1777665 kubeadm.go:322] 
	I0109 00:35:35.757005 1777665 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0109 00:35:35.757008 1777665 kubeadm.go:322] 
	I0109 00:35:35.757090 1777665 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token m2k7dy.yffzl73hgodiihn9 \
	I0109 00:35:35.757204 1777665 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:2f5d2b90e0873ecdcc03ee1f37a9ff73145aa86994d578f7f9f8008617cee046 
	I0109 00:35:35.757211 1777665 cni.go:84] Creating CNI manager for ""
	I0109 00:35:35.757217 1777665 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0109 00:35:35.759333 1777665 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0109 00:35:35.761453 1777665 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0109 00:35:35.766845 1777665 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0109 00:35:35.766863 1777665 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0109 00:35:35.797311 1777665 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0109 00:35:36.733381 1777665 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0109 00:35:36.733512 1777665 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:35:36.733585 1777665 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a minikube.k8s.io/name=scheduled-stop-334324 minikube.k8s.io/updated_at=2024_01_09T00_35_36_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0109 00:35:36.901010 1777665 ops.go:34] apiserver oom_adj: -16
	I0109 00:35:36.901049 1777665 kubeadm.go:1088] duration metric: took 167.594676ms to wait for elevateKubeSystemPrivileges.
	I0109 00:35:36.901060 1777665 kubeadm.go:406] StartCluster complete in 17.525516954s
	I0109 00:35:36.901077 1777665 settings.go:142] acquiring lock: {Name:mk0f4be07809726b91ed42aaaa2120516a2004e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0109 00:35:36.901136 1777665 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17830-1678586/kubeconfig
	I0109 00:35:36.901810 1777665 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17830-1678586/kubeconfig: {Name:mkd692fadb6f1e94cc8cf2ddbb66429fa6c0e8fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0109 00:35:36.903615 1777665 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0109 00:35:36.903997 1777665 config.go:182] Loaded profile config "scheduled-stop-334324": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0109 00:35:36.904030 1777665 addons.go:505] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0109 00:35:36.904086 1777665 addons.go:69] Setting storage-provisioner=true in profile "scheduled-stop-334324"
	I0109 00:35:36.904101 1777665 addons.go:237] Setting addon storage-provisioner=true in "scheduled-stop-334324"
	I0109 00:35:36.904153 1777665 host.go:66] Checking if "scheduled-stop-334324" exists ...
	I0109 00:35:36.904190 1777665 addons.go:69] Setting default-storageclass=true in profile "scheduled-stop-334324"
	I0109 00:35:36.904205 1777665 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "scheduled-stop-334324"
	I0109 00:35:36.904479 1777665 cli_runner.go:164] Run: docker container inspect scheduled-stop-334324 --format={{.State.Status}}
	I0109 00:35:36.904615 1777665 cli_runner.go:164] Run: docker container inspect scheduled-stop-334324 --format={{.State.Status}}
	I0109 00:35:36.970957 1777665 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0109 00:35:36.972673 1777665 addons.go:237] Setting addon default-storageclass=true in "scheduled-stop-334324"
	I0109 00:35:36.973396 1777665 host.go:66] Checking if "scheduled-stop-334324" exists ...
	I0109 00:35:36.973405 1777665 addons.go:429] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0109 00:35:36.973415 1777665 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0109 00:35:36.973474 1777665 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-334324
	I0109 00:35:36.973857 1777665 cli_runner.go:164] Run: docker container inspect scheduled-stop-334324 --format={{.State.Status}}
	I0109 00:35:37.018718 1777665 addons.go:429] installing /etc/kubernetes/addons/storageclass.yaml
	I0109 00:35:37.018731 1777665 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0109 00:35:37.018796 1777665 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-334324
	I0109 00:35:37.041610 1777665 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34504 SSHKeyPath:/home/jenkins/minikube-integration/17830-1678586/.minikube/machines/scheduled-stop-334324/id_rsa Username:docker}
	I0109 00:35:37.064112 1777665 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34504 SSHKeyPath:/home/jenkins/minikube-integration/17830-1678586/.minikube/machines/scheduled-stop-334324/id_rsa Username:docker}
	I0109 00:35:37.078468 1777665 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.67.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
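	The sed pipeline above rewrites the CoreDNS Corefile so a static hosts block answers host.minikube.internal before queries fall through to the forward plugin. A Go sketch of the same edit as a pure string transformation; the sample Corefile below is an illustrative skeleton, not the one from this cluster.
	
	// corednspatch.go: insert a "hosts" stanza ahead of the forward plugin in
	// a CoreDNS Corefile, the edit the sed pipeline above performs in-place.
	package main
	
	import (
		"fmt"
		"strings"
	)
	
	func injectHostRecord(corefile, ip, name string) string {
		hosts := fmt.Sprintf("        hosts {\n           %s %s\n           fallthrough\n        }\n", ip, name)
		var b strings.Builder
		for _, line := range strings.SplitAfter(corefile, "\n") {
			// Place the hosts block just before the forward plugin so the
			// static record is consulted first, falling through to upstream.
			if strings.HasPrefix(strings.TrimSpace(line), "forward . /etc/resolv.conf") {
				b.WriteString(hosts)
			}
			b.WriteString(line)
		}
		return b.String()
	}
	
	func main() {
		corefile := `.:53 {
	        errors
	        health
	        forward . /etc/resolv.conf {
	           max_concurrent 1000
	        }
	        cache 30
	}`
		fmt.Println(injectHostRecord(corefile, "192.168.67.1", "host.minikube.internal"))
	}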
	I0109 00:35:37.249595 1777665 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0109 00:35:37.271707 1777665 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0109 00:35:37.406645 1777665 kapi.go:248] "coredns" deployment in "kube-system" namespace and "scheduled-stop-334324" context rescaled to 1 replicas
	I0109 00:35:37.406671 1777665 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0109 00:35:37.408959 1777665 out.go:177] * Verifying Kubernetes components...
	I0109 00:35:37.411396 1777665 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0109 00:35:37.471579 1777665 start.go:929] {"host.minikube.internal": 192.168.67.1} host record injected into CoreDNS's ConfigMap
	I0109 00:35:37.839938 1777665 api_server.go:52] waiting for apiserver process to appear ...
	I0109 00:35:37.839986 1777665 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0109 00:35:37.855337 1777665 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0109 00:35:37.857496 1777665 addons.go:508] enable addons completed in 953.456388ms: enabled=[storage-provisioner default-storageclass]
	I0109 00:35:37.865260 1777665 api_server.go:72] duration metric: took 458.559773ms to wait for apiserver process to appear ...
	I0109 00:35:37.865273 1777665 api_server.go:88] waiting for apiserver healthz status ...
	I0109 00:35:37.865291 1777665 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0109 00:35:37.874988 1777665 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0109 00:35:37.876308 1777665 api_server.go:141] control plane version: v1.28.4
	I0109 00:35:37.876322 1777665 api_server.go:131] duration metric: took 11.043165ms to wait for apiserver health ...
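	The healthz wait above is a repeated HTTPS GET against /healthz until the apiserver answers 200 with body "ok". A simplified polling sketch follows; the endpoint and interval are illustrative, and the TLS verification shortcut stands in for trusting the cluster CA generated earlier in this log.
	
	// healthwait.go: poll an apiserver /healthz endpoint until it reports ok,
	// a simplified version of the wait logged above.
	package main
	
	import (
		"crypto/tls"
		"fmt"
		"io"
		"log"
		"net/http"
		"time"
	)
	
	func waitHealthy(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 2 * time.Second,
			// Illustrative shortcut: skip cert verification. The real client
			// would trust the minikube CA instead.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK && string(body) == "ok" {
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver not healthy within %s", timeout)
	}
	
	func main() {
		if err := waitHealthy("https://192.168.67.2:8443/healthz", time.Minute); err != nil {
			log.Fatal(err)
		}
		fmt.Println("apiserver healthy")
	}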
	I0109 00:35:37.876329 1777665 system_pods.go:43] waiting for kube-system pods to appear ...
	I0109 00:35:37.884059 1777665 system_pods.go:59] 5 kube-system pods found
	I0109 00:35:37.884080 1777665 system_pods.go:61] "etcd-scheduled-stop-334324" [98afd182-4462-4003-8760-72d43d369e3a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0109 00:35:37.884088 1777665 system_pods.go:61] "kube-apiserver-scheduled-stop-334324" [e040372c-63b4-4b0a-8523-02c3df00eaae] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0109 00:35:37.884096 1777665 system_pods.go:61] "kube-controller-manager-scheduled-stop-334324" [619ae420-5070-4197-97ac-efd73af20fa3] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0109 00:35:37.884104 1777665 system_pods.go:61] "kube-scheduler-scheduled-stop-334324" [3f2beba2-b0ef-43d5-b11b-d314174cb861] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0109 00:35:37.884110 1777665 system_pods.go:61] "storage-provisioner" [0aee2edc-12ad-4545-a417-9f1769f999fb] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..)
	I0109 00:35:37.884115 1777665 system_pods.go:74] duration metric: took 7.782164ms to wait for pod list to return data ...
	I0109 00:35:37.884124 1777665 kubeadm.go:581] duration metric: took 477.432504ms to wait for : map[apiserver:true system_pods:true] ...
	I0109 00:35:37.884135 1777665 node_conditions.go:102] verifying NodePressure condition ...
	I0109 00:35:37.887572 1777665 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0109 00:35:37.887590 1777665 node_conditions.go:123] node cpu capacity is 2
	I0109 00:35:37.887600 1777665 node_conditions.go:105] duration metric: took 3.460855ms to run NodePressure ...
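	The NodePressure verification reads capacity straight off the Node objects (here 2 CPUs and 203034800Ki of ephemeral storage). Below is a sketch of the same read with client-go, assuming a reachable cluster; the kubeconfig path is an illustrative placeholder.
	
	// nodecap.go: list nodes and print the capacity fields the NodePressure
	// verification above inspects.
	package main
	
	import (
		"context"
		"fmt"
		"log"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // illustrative path
		if err != nil {
			log.Fatal(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
		if err != nil {
			log.Fatal(err)
		}
		for _, n := range nodes.Items {
			cpu := n.Status.Capacity[corev1.ResourceCPU]
			eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
			fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), eph.String())
		}
	}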
	I0109 00:35:37.887611 1777665 start.go:228] waiting for startup goroutines ...
	I0109 00:35:37.887618 1777665 start.go:233] waiting for cluster config update ...
	I0109 00:35:37.887626 1777665 start.go:242] writing updated cluster config ...
	I0109 00:35:37.887895 1777665 ssh_runner.go:195] Run: rm -f paused
	I0109 00:35:37.951839 1777665 start.go:600] kubectl: 1.29.0, cluster: 1.28.4 (minor skew: 1)
	I0109 00:35:37.954328 1777665 out.go:177] * Done! kubectl is now configured to use "scheduled-stop-334324" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Jan 09 00:35:27 scheduled-stop-334324 crio[894]: time="2024-01-09 00:35:27.916840492Z" level=info msg="Ran pod sandbox 878d9e0569fa7a8c04f4e4cbdaf77a37fc55aa2e61a071f8f6bf551ba303c1f9 with infra container: kube-system/kube-scheduler-scheduled-stop-334324/POD" id=db398704-8ac1-45f4-b2b3-9c4a9cbdb421 name=/runtime.v1.RuntimeService/RunPodSandbox
	Jan 09 00:35:27 scheduled-stop-334324 crio[894]: time="2024-01-09 00:35:27.920314369Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.28.4" id=7a93ef7b-000d-4b47-93b3-7c02b5e962e9 name=/runtime.v1.ImageService/ImageStatus
	Jan 09 00:35:27 scheduled-stop-334324 crio[894]: time="2024-01-09 00:35:27.920639877Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:9961cbceaf234d59b7dcf8a197a024f3e3ce4b7fe2b67c2378efd3d209ca994b,RepoTags:[registry.k8s.io/kube-controller-manager:v1.28.4],RepoDigests:[registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c registry.k8s.io/kube-controller-manager@sha256:fe49ea386d014cbf10cd16f53900b91bb7e7c32c5cf4fdcea550c33bedfdc92e],Size_:117252916,Uid:&Int64Value{Value:0,},Username:,Spec:nil,},Info:map[string]string{},}" id=7a93ef7b-000d-4b47-93b3-7c02b5e962e9 name=/runtime.v1.ImageService/ImageStatus
	Jan 09 00:35:27 scheduled-stop-334324 crio[894]: time="2024-01-09 00:35:27.922305451Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.28.4" id=c72265e5-40ae-4bb5-b214-213e1d83dcd0 name=/runtime.v1.ImageService/ImageStatus
	Jan 09 00:35:27 scheduled-stop-334324 crio[894]: time="2024-01-09 00:35:27.924263851Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:05c284c929889d88306fdb3dd14ee2d0132543740f9e247685243214fc3d2c54,RepoTags:[registry.k8s.io/kube-scheduler:v1.28.4],RepoDigests:[registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba registry.k8s.io/kube-scheduler@sha256:ddb0fb05335238789e9a847f0c6731e1da918c42a389625cb2a7ec577ca20afe],Size_:59253556,Uid:&Int64Value{Value:0,},Username:,Spec:nil,},Info:map[string]string{},}" id=c72265e5-40ae-4bb5-b214-213e1d83dcd0 name=/runtime.v1.ImageService/ImageStatus
	Jan 09 00:35:27 scheduled-stop-334324 crio[894]: time="2024-01-09 00:35:27.924625570Z" level=info msg="Creating container: kube-system/kube-controller-manager-scheduled-stop-334324/kube-controller-manager" id=fd2e7c97-9de7-49f4-ae26-ee10b496e770 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 09 00:35:27 scheduled-stop-334324 crio[894]: time="2024-01-09 00:35:27.924776209Z" level=warning msg="Allowed annotations are specified for workload []"
	Jan 09 00:35:27 scheduled-stop-334324 crio[894]: time="2024-01-09 00:35:27.930799113Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.28.4" id=d0c9af00-91b8-4211-9fbd-0904c42a9967 name=/runtime.v1.ImageService/ImageStatus
	Jan 09 00:35:27 scheduled-stop-334324 crio[894]: time="2024-01-09 00:35:27.937028484Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:05c284c929889d88306fdb3dd14ee2d0132543740f9e247685243214fc3d2c54,RepoTags:[registry.k8s.io/kube-scheduler:v1.28.4],RepoDigests:[registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba registry.k8s.io/kube-scheduler@sha256:ddb0fb05335238789e9a847f0c6731e1da918c42a389625cb2a7ec577ca20afe],Size_:59253556,Uid:&Int64Value{Value:0,},Username:,Spec:nil,},Info:map[string]string{},}" id=d0c9af00-91b8-4211-9fbd-0904c42a9967 name=/runtime.v1.ImageService/ImageStatus
	Jan 09 00:35:27 scheduled-stop-334324 crio[894]: time="2024-01-09 00:35:27.937986097Z" level=info msg="Creating container: kube-system/kube-scheduler-scheduled-stop-334324/kube-scheduler" id=c1d690d1-683b-47dc-adfd-ed149e80d4fa name=/runtime.v1.RuntimeService/CreateContainer
	Jan 09 00:35:27 scheduled-stop-334324 crio[894]: time="2024-01-09 00:35:27.938165947Z" level=warning msg="Allowed annotations are specified for workload []"
	Jan 09 00:35:27 scheduled-stop-334324 crio[894]: time="2024-01-09 00:35:27.994296753Z" level=info msg="Created container 1d58693c67e9372fe110c0256762c74a1ebdcbb0fc453d2a350125f30f4d767c: kube-system/etcd-scheduled-stop-334324/etcd" id=eec01485-90ef-4ff5-a5b6-8b98e0dc5f3f name=/runtime.v1.RuntimeService/CreateContainer
	Jan 09 00:35:27 scheduled-stop-334324 crio[894]: time="2024-01-09 00:35:27.995489436Z" level=info msg="Starting container: 1d58693c67e9372fe110c0256762c74a1ebdcbb0fc453d2a350125f30f4d767c" id=7789b101-273a-47ce-926f-4e742131ad30 name=/runtime.v1.RuntimeService/StartContainer
	Jan 09 00:35:28 scheduled-stop-334324 crio[894]: time="2024-01-09 00:35:28.026608725Z" level=info msg="Started container" PID=1213 containerID=1d58693c67e9372fe110c0256762c74a1ebdcbb0fc453d2a350125f30f4d767c description=kube-system/etcd-scheduled-stop-334324/etcd id=7789b101-273a-47ce-926f-4e742131ad30 name=/runtime.v1.RuntimeService/StartContainer sandboxID=a4e4c3e472b3bc4717444eac9bab3e4bc1e7ce568cd5342597228ce8a93435fd
	Jan 09 00:35:28 scheduled-stop-334324 crio[894]: time="2024-01-09 00:35:28.077125558Z" level=info msg="Created container 9b67556ec19dab597dda6e3462afa20a312ca0375c86233440cc56ae470812e4: kube-system/kube-controller-manager-scheduled-stop-334324/kube-controller-manager" id=fd2e7c97-9de7-49f4-ae26-ee10b496e770 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 09 00:35:28 scheduled-stop-334324 crio[894]: time="2024-01-09 00:35:28.077815352Z" level=info msg="Starting container: 9b67556ec19dab597dda6e3462afa20a312ca0375c86233440cc56ae470812e4" id=baec6c46-7786-4857-8f6c-6c42e80a963d name=/runtime.v1.RuntimeService/StartContainer
	Jan 09 00:35:28 scheduled-stop-334324 crio[894]: time="2024-01-09 00:35:28.101424979Z" level=info msg="Created container cd81d8fede50d35aef4714d925457d92077045498ab2f7934aaf12479b1222a1: kube-system/kube-apiserver-scheduled-stop-334324/kube-apiserver" id=ffdb29bf-b71c-44a6-94e0-8b1fbb973b67 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 09 00:35:28 scheduled-stop-334324 crio[894]: time="2024-01-09 00:35:28.102048508Z" level=info msg="Starting container: cd81d8fede50d35aef4714d925457d92077045498ab2f7934aaf12479b1222a1" id=5513e837-a7b4-489f-b66c-30b5ee5985e6 name=/runtime.v1.RuntimeService/StartContainer
	Jan 09 00:35:28 scheduled-stop-334324 crio[894]: time="2024-01-09 00:35:28.103424168Z" level=info msg="Created container ab37253c1290d551d9ca055ac806659c3ed3cb15451e37915a03e841e555689e: kube-system/kube-scheduler-scheduled-stop-334324/kube-scheduler" id=c1d690d1-683b-47dc-adfd-ed149e80d4fa name=/runtime.v1.RuntimeService/CreateContainer
	Jan 09 00:35:28 scheduled-stop-334324 crio[894]: time="2024-01-09 00:35:28.104000107Z" level=info msg="Starting container: ab37253c1290d551d9ca055ac806659c3ed3cb15451e37915a03e841e555689e" id=0203b13b-adc3-44f5-83a9-d6379e36e93e name=/runtime.v1.RuntimeService/StartContainer
	Jan 09 00:35:28 scheduled-stop-334324 crio[894]: time="2024-01-09 00:35:28.104443680Z" level=info msg="Started container" PID=1259 containerID=9b67556ec19dab597dda6e3462afa20a312ca0375c86233440cc56ae470812e4 description=kube-system/kube-controller-manager-scheduled-stop-334324/kube-controller-manager id=baec6c46-7786-4857-8f6c-6c42e80a963d name=/runtime.v1.RuntimeService/StartContainer sandboxID=914454d04ba046dbb8959b0159ee703a5f28cbbb92a897e6a185cfc35480983c
	Jan 09 00:35:28 scheduled-stop-334324 crio[894]: time="2024-01-09 00:35:28.123110905Z" level=info msg="Started container" PID=1280 containerID=cd81d8fede50d35aef4714d925457d92077045498ab2f7934aaf12479b1222a1 description=kube-system/kube-apiserver-scheduled-stop-334324/kube-apiserver id=5513e837-a7b4-489f-b66c-30b5ee5985e6 name=/runtime.v1.RuntimeService/StartContainer sandboxID=6c80ce1401b81ce1d067ef405d0b08cfc09cf1f88ce98629189621ae4c563cf7
	Jan 09 00:35:28 scheduled-stop-334324 crio[894]: time="2024-01-09 00:35:28.123390136Z" level=info msg="Started container" PID=1268 containerID=ab37253c1290d551d9ca055ac806659c3ed3cb15451e37915a03e841e555689e description=kube-system/kube-scheduler-scheduled-stop-334324/kube-scheduler id=0203b13b-adc3-44f5-83a9-d6379e36e93e name=/runtime.v1.RuntimeService/StartContainer sandboxID=878d9e0569fa7a8c04f4e4cbdaf77a37fc55aa2e61a071f8f6bf551ba303c1f9
	Jan 09 00:35:35 scheduled-stop-334324 crio[894]: time="2024-01-09 00:35:35.726188955Z" level=info msg="Checking image status: registry.k8s.io/pause:3.9" id=c3d1b1ff-0e30-4d4b-a5b8-4d7beccd52f9 name=/runtime.v1.ImageService/ImageStatus
	Jan 09 00:35:35 scheduled-stop-334324 crio[894]: time="2024-01-09 00:35:35.726374934Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e,RepoTags:[registry.k8s.io/pause:3.9],RepoDigests:[registry.k8s.io/pause@sha256:3ec98b8452dc8ae265a6917dfb81587ac78849e520d5dbba6de524851d20eca6 registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097],Size_:520014,Uid:&Int64Value{Value:65535,},Username:,Spec:nil,},Info:map[string]string{},}" id=c3d1b1ff-0e30-4d4b-a5b8-4d7beccd52f9 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	cd81d8fede50d       04b4c447bb9d4840af3bf7e836397379d65df87c86e55dcd27f31a8d11df2419   11 seconds ago      Running             kube-apiserver            0                   6c80ce1401b81       kube-apiserver-scheduled-stop-334324
	ab37253c1290d       05c284c929889d88306fdb3dd14ee2d0132543740f9e247685243214fc3d2c54   11 seconds ago      Running             kube-scheduler            0                   878d9e0569fa7       kube-scheduler-scheduled-stop-334324
	9b67556ec19da       9961cbceaf234d59b7dcf8a197a024f3e3ce4b7fe2b67c2378efd3d209ca994b   11 seconds ago      Running             kube-controller-manager   0                   914454d04ba04       kube-controller-manager-scheduled-stop-334324
	1d58693c67e93       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace   11 seconds ago      Running             etcd                      0                   a4e4c3e472b3b       etcd-scheduled-stop-334324
	
	
	==> describe nodes <==
	Name:               scheduled-stop-334324
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=scheduled-stop-334324
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a
	                    minikube.k8s.io/name=scheduled-stop-334324
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_09T00_35_36_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 09 Jan 2024 00:35:32 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  scheduled-stop-334324
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 09 Jan 2024 00:35:35 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 09 Jan 2024 00:35:35 +0000   Tue, 09 Jan 2024 00:35:28 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 09 Jan 2024 00:35:35 +0000   Tue, 09 Jan 2024 00:35:28 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 09 Jan 2024 00:35:35 +0000   Tue, 09 Jan 2024 00:35:28 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Tue, 09 Jan 2024 00:35:35 +0000   Tue, 09 Jan 2024 00:35:28 +0000   KubeletNotReady              [container runtime status check may not have completed yet, container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?]
	Addresses:
	  InternalIP:  192.168.67.2
	  Hostname:    scheduled-stop-334324
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	System Info:
	  Machine ID:                 31f990c0f3394e9c9f5bfc214c4b13cc
	  System UUID:                a24361c8-2a2b-4be2-a531-918b0ca1599f
	  Boot ID:                    9a753e90-64b1-452a-8e10-9b878947801f
	  Kernel Version:             5.15.0-1051-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	Non-terminated Pods:          (4 in total)
	  Namespace                   Name                                             CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                             ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-scheduled-stop-334324                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         3s
	  kube-system                 kube-apiserver-scheduled-stop-334324             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5s
	  kube-system                 kube-controller-manager-scheduled-stop-334324    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4s
	  kube-system                 kube-scheduler-scheduled-stop-334324             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                650m (32%)  0 (0%)
	  memory             100Mi (1%)  0 (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From     Message
	  ----    ------                   ----  ----     -------
	  Normal  Starting                 4s    kubelet  Starting kubelet.
	  Normal  NodeHasSufficientMemory  4s    kubelet  Node scheduled-stop-334324 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4s    kubelet  Node scheduled-stop-334324 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4s    kubelet  Node scheduled-stop-334324 status is now: NodeHasSufficientPID
	
	
	==> dmesg <==
	[  +0.001079] FS-Cache: O-key=[8] '2f76ed0000000000'
	[  +0.000720] FS-Cache: N-cookie c=00000066 [p=0000005d fl=2 nc=0 na=1]
	[  +0.001023] FS-Cache: N-cookie d=000000001df03bef{9p.inode} n=0000000009be8c6c
	[  +0.001112] FS-Cache: N-key=[8] '2f76ed0000000000'
	[  +0.010607] FS-Cache: Duplicate cookie detected
	[  +0.000806] FS-Cache: O-cookie c=00000060 [p=0000005d fl=226 nc=0 na=1]
	[  +0.001106] FS-Cache: O-cookie d=000000001df03bef{9p.inode} n=00000000b45aa7e6
	[  +0.001139] FS-Cache: O-key=[8] '2f76ed0000000000'
	[  +0.000750] FS-Cache: N-cookie c=00000067 [p=0000005d fl=2 nc=0 na=1]
	[  +0.001054] FS-Cache: N-cookie d=000000001df03bef{9p.inode} n=00000000e9f33a46
	[  +0.001189] FS-Cache: N-key=[8] '2f76ed0000000000'
	[  +2.185619] FS-Cache: Duplicate cookie detected
	[  +0.000751] FS-Cache: O-cookie c=0000005e [p=0000005d fl=226 nc=0 na=1]
	[  +0.001094] FS-Cache: O-cookie d=000000001df03bef{9p.inode} n=00000000ed3c59a1
	[  +0.001046] FS-Cache: O-key=[8] '2e76ed0000000000'
	[  +0.000727] FS-Cache: N-cookie c=00000069 [p=0000005d fl=2 nc=0 na=1]
	[  +0.000928] FS-Cache: N-cookie d=000000001df03bef{9p.inode} n=00000000f2938397
	[  +0.001085] FS-Cache: N-key=[8] '2e76ed0000000000'
	[  +0.397498] FS-Cache: Duplicate cookie detected
	[  +0.000731] FS-Cache: O-cookie c=00000063 [p=0000005d fl=226 nc=0 na=1]
	[  +0.001010] FS-Cache: O-cookie d=000000001df03bef{9p.inode} n=000000005629db1e
	[  +0.001140] FS-Cache: O-key=[8] '3476ed0000000000'
	[  +0.000717] FS-Cache: N-cookie c=0000006a [p=0000005d fl=2 nc=0 na=1]
	[  +0.000990] FS-Cache: N-cookie d=000000001df03bef{9p.inode} n=0000000009be8c6c
	[  +0.001266] FS-Cache: N-key=[8] '3476ed0000000000'
	
	
	==> etcd [1d58693c67e9372fe110c0256762c74a1ebdcbb0fc453d2a350125f30f4d767c] <==
	{"level":"info","ts":"2024-01-09T00:35:28.210392Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 switched to configuration voters=(9694253945895198663)"}
	{"level":"info","ts":"2024-01-09T00:35:28.210571Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","added-peer-id":"8688e899f7831fc7","added-peer-peer-urls":["https://192.168.67.2:2380"]}
	{"level":"info","ts":"2024-01-09T00:35:28.231215Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2024-01-09T00:35:28.23133Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2024-01-09T00:35:28.23141Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-01-09T00:35:28.232077Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"8688e899f7831fc7","initial-advertise-peer-urls":["https://192.168.67.2:2380"],"listen-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.67.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-01-09T00:35:28.232111Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-01-09T00:35:29.166488Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 is starting a new election at term 1"}
	{"level":"info","ts":"2024-01-09T00:35:29.166536Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-01-09T00:35:29.166564Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgPreVoteResp from 8688e899f7831fc7 at term 1"}
	{"level":"info","ts":"2024-01-09T00:35:29.166578Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became candidate at term 2"}
	{"level":"info","ts":"2024-01-09T00:35:29.166584Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgVoteResp from 8688e899f7831fc7 at term 2"}
	{"level":"info","ts":"2024-01-09T00:35:29.166595Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became leader at term 2"}
	{"level":"info","ts":"2024-01-09T00:35:29.166603Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8688e899f7831fc7 elected leader 8688e899f7831fc7 at term 2"}
	{"level":"info","ts":"2024-01-09T00:35:29.170524Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-09T00:35:29.176665Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"8688e899f7831fc7","local-member-attributes":"{Name:scheduled-stop-334324 ClientURLs:[https://192.168.67.2:2379]}","request-path":"/0/members/8688e899f7831fc7/attributes","cluster-id":"9d8fdeb88b6def78","publish-timeout":"7s"}
	{"level":"info","ts":"2024-01-09T00:35:29.176704Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-09T00:35:29.177679Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-01-09T00:35:29.177736Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-09T00:35:29.178548Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.67.2:2379"}
	{"level":"info","ts":"2024-01-09T00:35:29.186514Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-09T00:35:29.186628Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-09T00:35:29.186657Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-09T00:35:29.206489Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-01-09T00:35:29.206531Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 00:35:39 up  7:18,  0 users,  load average: 1.07, 1.08, 1.39
	Linux scheduled-stop-334324 5.15.0-1051-aws #56~20.04.1-Ubuntu SMP Tue Nov 28 15:43:06 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	
	==> kube-apiserver [cd81d8fede50d35aef4714d925457d92077045498ab2f7934aaf12479b1222a1] <==
	I0109 00:35:32.509765       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0109 00:35:32.510037       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0109 00:35:32.510269       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0109 00:35:32.510301       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0109 00:35:32.536109       1 controller.go:624] quota admission added evaluator for: namespaces
	I0109 00:35:32.572090       1 shared_informer.go:318] Caches are synced for configmaps
	I0109 00:35:32.572487       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0109 00:35:32.584128       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0109 00:35:32.584245       1 aggregator.go:166] initial CRD sync complete...
	I0109 00:35:32.584279       1 autoregister_controller.go:141] Starting autoregister controller
	I0109 00:35:32.584318       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0109 00:35:32.584350       1 cache.go:39] Caches are synced for autoregister controller
	I0109 00:35:33.270996       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0109 00:35:33.276501       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0109 00:35:33.276598       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0109 00:35:33.836596       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0109 00:35:33.876600       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0109 00:35:33.944698       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0109 00:35:33.950242       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.67.2]
	I0109 00:35:33.951331       1 controller.go:624] quota admission added evaluator for: endpoints
	I0109 00:35:33.955472       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0109 00:35:34.416695       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0109 00:35:35.599404       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0109 00:35:35.615991       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0109 00:35:35.625718       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	
	
	==> kube-controller-manager [9b67556ec19dab597dda6e3462afa20a312ca0375c86233440cc56ae470812e4] <==
	I0109 00:35:36.725776       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="daemonsets.apps"
	I0109 00:35:36.725935       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="ingresses.networking.k8s.io"
	I0109 00:35:36.726012       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="limitranges"
	I0109 00:35:36.726069       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="poddisruptionbudgets.policy"
	I0109 00:35:36.726156       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="controllerrevisions.apps"
	I0109 00:35:36.726228       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="networkpolicies.networking.k8s.io"
	I0109 00:35:36.726276       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="statefulsets.apps"
	I0109 00:35:36.726321       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="deployments.apps"
	I0109 00:35:36.726377       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="podtemplates"
	I0109 00:35:36.726419       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="replicasets.apps"
	I0109 00:35:36.726516       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="jobs.batch"
	I0109 00:35:36.726575       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="csistoragecapacities.storage.k8s.io"
	I0109 00:35:36.726621       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="leases.coordination.k8s.io"
	I0109 00:35:36.726680       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="endpointslices.discovery.k8s.io"
	I0109 00:35:36.727002       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="serviceaccounts"
	I0109 00:35:36.727108       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="rolebindings.rbac.authorization.k8s.io"
	I0109 00:35:36.727171       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="roles.rbac.authorization.k8s.io"
	I0109 00:35:36.727233       1 controllermanager.go:642] "Started controller" controller="resourcequota-controller"
	I0109 00:35:36.727298       1 resource_quota_controller.go:294] "Starting resource quota controller"
	I0109 00:35:36.727335       1 shared_informer.go:311] Waiting for caches to sync for resource quota
	I0109 00:35:36.727389       1 resource_quota_monitor.go:305] "QuotaMonitor running"
	I0109 00:35:36.872918       1 controllermanager.go:642] "Started controller" controller="token-cleaner-controller"
	I0109 00:35:36.872992       1 tokencleaner.go:112] "Starting token cleaner controller"
	I0109 00:35:36.873001       1 shared_informer.go:311] Waiting for caches to sync for token_cleaner
	I0109 00:35:36.873008       1 shared_informer.go:318] Caches are synced for token_cleaner
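
Each QuotaMonitor line registers an object-count evaluator for one resource; these are what back object-count quotas such as count/deployments.apps, and the controller waits for its informer caches before enforcing anything. A minimal quota exercising those evaluators (illustrative, assumes a reachable cluster):

	kubectl create namespace quota-demo
	kubectl create quota demo -n quota-demo --hard=count/deployments.apps=2,count/roles.rbac.authorization.k8s.io=5
	kubectl describe quota demo -n quota-demo   # Used/Hard counts come from the evaluators above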
	
	
	==> kube-scheduler [ab37253c1290d551d9ca055ac806659c3ed3cb15451e37915a03e841e555689e] <==
	E0109 00:35:32.537318       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0109 00:35:32.537337       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0109 00:35:32.537430       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0109 00:35:32.537504       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0109 00:35:32.537512       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0109 00:35:32.537851       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0109 00:35:32.537887       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0109 00:35:32.537943       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0109 00:35:33.409211       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0109 00:35:33.409358       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0109 00:35:33.477222       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0109 00:35:33.477254       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0109 00:35:33.482422       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0109 00:35:33.482495       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0109 00:35:33.484183       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0109 00:35:33.484274       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0109 00:35:33.486538       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0109 00:35:33.486563       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0109 00:35:33.512399       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0109 00:35:33.512510       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0109 00:35:33.596739       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0109 00:35:33.596857       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0109 00:35:33.630950       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0109 00:35:33.630983       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0109 00:35:36.526416       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
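
The burst of "forbidden" list/watch errors is the usual startup race: the scheduler's informers come up before its RBAC bindings have propagated, and the final "Caches are synced" line shows the race resolving itself. Once a cluster settles, the same permissions can be spot-checked (illustrative):

	kubectl auth can-i list pods --as=system:kube-scheduler                          # expect yes
	kubectl auth can-i watch poddisruptionbudgets.policy --as=system:kube-scheduler  # expect yes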
	
	
	==> kubelet <==
	Jan 09 00:35:36 scheduled-stop-334324 kubelet[1383]: I0109 00:35:36.060033    1383 topology_manager.go:215] "Topology Admit Handler" podUID="3abf83a6f7c51d88981a442835ea8744" podNamespace="kube-system" podName="kube-scheduler-scheduled-stop-334324"
	Jan 09 00:35:36 scheduled-stop-334324 kubelet[1383]: E0109 00:35:36.097190    1383 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-scheduled-stop-334324\" already exists" pod="kube-system/kube-controller-manager-scheduled-stop-334324"
	Jan 09 00:35:36 scheduled-stop-334324 kubelet[1383]: E0109 00:35:36.097631    1383 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-apiserver-scheduled-stop-334324\" already exists" pod="kube-system/kube-apiserver-scheduled-stop-334324"
	Jan 09 00:35:36 scheduled-stop-334324 kubelet[1383]: I0109 00:35:36.111888    1383 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a4dcdfb37796dbcfeb4b92046cfa2ccb-ca-certs\") pod \"kube-apiserver-scheduled-stop-334324\" (UID: \"a4dcdfb37796dbcfeb4b92046cfa2ccb\") " pod="kube-system/kube-apiserver-scheduled-stop-334324"
	Jan 09 00:35:36 scheduled-stop-334324 kubelet[1383]: I0109 00:35:36.111931    1383 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a4dcdfb37796dbcfeb4b92046cfa2ccb-k8s-certs\") pod \"kube-apiserver-scheduled-stop-334324\" (UID: \"a4dcdfb37796dbcfeb4b92046cfa2ccb\") " pod="kube-system/kube-apiserver-scheduled-stop-334324"
	Jan 09 00:35:36 scheduled-stop-334324 kubelet[1383]: I0109 00:35:36.111958    1383 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a4dcdfb37796dbcfeb4b92046cfa2ccb-usr-local-share-ca-certificates\") pod \"kube-apiserver-scheduled-stop-334324\" (UID: \"a4dcdfb37796dbcfeb4b92046cfa2ccb\") " pod="kube-system/kube-apiserver-scheduled-stop-334324"
	Jan 09 00:35:36 scheduled-stop-334324 kubelet[1383]: I0109 00:35:36.111982    1383 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/54f6f5da88b40b3704fff96610f2e783-ca-certs\") pod \"kube-controller-manager-scheduled-stop-334324\" (UID: \"54f6f5da88b40b3704fff96610f2e783\") " pod="kube-system/kube-controller-manager-scheduled-stop-334324"
	Jan 09 00:35:36 scheduled-stop-334324 kubelet[1383]: I0109 00:35:36.112009    1383 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a4dcdfb37796dbcfeb4b92046cfa2ccb-etc-ca-certificates\") pod \"kube-apiserver-scheduled-stop-334324\" (UID: \"a4dcdfb37796dbcfeb4b92046cfa2ccb\") " pod="kube-system/kube-apiserver-scheduled-stop-334324"
	Jan 09 00:35:36 scheduled-stop-334324 kubelet[1383]: I0109 00:35:36.112036    1383 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/54f6f5da88b40b3704fff96610f2e783-etc-ca-certificates\") pod \"kube-controller-manager-scheduled-stop-334324\" (UID: \"54f6f5da88b40b3704fff96610f2e783\") " pod="kube-system/kube-controller-manager-scheduled-stop-334324"
	Jan 09 00:35:36 scheduled-stop-334324 kubelet[1383]: I0109 00:35:36.112059    1383 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/54f6f5da88b40b3704fff96610f2e783-k8s-certs\") pod \"kube-controller-manager-scheduled-stop-334324\" (UID: \"54f6f5da88b40b3704fff96610f2e783\") " pod="kube-system/kube-controller-manager-scheduled-stop-334324"
	Jan 09 00:35:36 scheduled-stop-334324 kubelet[1383]: I0109 00:35:36.112085    1383 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/54f6f5da88b40b3704fff96610f2e783-usr-local-share-ca-certificates\") pod \"kube-controller-manager-scheduled-stop-334324\" (UID: \"54f6f5da88b40b3704fff96610f2e783\") " pod="kube-system/kube-controller-manager-scheduled-stop-334324"
	Jan 09 00:35:36 scheduled-stop-334324 kubelet[1383]: I0109 00:35:36.112111    1383 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3abf83a6f7c51d88981a442835ea8744-kubeconfig\") pod \"kube-scheduler-scheduled-stop-334324\" (UID: \"3abf83a6f7c51d88981a442835ea8744\") " pod="kube-system/kube-scheduler-scheduled-stop-334324"
	Jan 09 00:35:36 scheduled-stop-334324 kubelet[1383]: I0109 00:35:36.112134    1383 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/5e2b37a836d467df3e4cb47a23296e10-etcd-certs\") pod \"etcd-scheduled-stop-334324\" (UID: \"5e2b37a836d467df3e4cb47a23296e10\") " pod="kube-system/etcd-scheduled-stop-334324"
	Jan 09 00:35:36 scheduled-stop-334324 kubelet[1383]: I0109 00:35:36.112161    1383 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/5e2b37a836d467df3e4cb47a23296e10-etcd-data\") pod \"etcd-scheduled-stop-334324\" (UID: \"5e2b37a836d467df3e4cb47a23296e10\") " pod="kube-system/etcd-scheduled-stop-334324"
	Jan 09 00:35:36 scheduled-stop-334324 kubelet[1383]: I0109 00:35:36.112186    1383 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a4dcdfb37796dbcfeb4b92046cfa2ccb-usr-share-ca-certificates\") pod \"kube-apiserver-scheduled-stop-334324\" (UID: \"a4dcdfb37796dbcfeb4b92046cfa2ccb\") " pod="kube-system/kube-apiserver-scheduled-stop-334324"
	Jan 09 00:35:36 scheduled-stop-334324 kubelet[1383]: I0109 00:35:36.112209    1383 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/54f6f5da88b40b3704fff96610f2e783-flexvolume-dir\") pod \"kube-controller-manager-scheduled-stop-334324\" (UID: \"54f6f5da88b40b3704fff96610f2e783\") " pod="kube-system/kube-controller-manager-scheduled-stop-334324"
	Jan 09 00:35:36 scheduled-stop-334324 kubelet[1383]: I0109 00:35:36.112234    1383 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/54f6f5da88b40b3704fff96610f2e783-kubeconfig\") pod \"kube-controller-manager-scheduled-stop-334324\" (UID: \"54f6f5da88b40b3704fff96610f2e783\") " pod="kube-system/kube-controller-manager-scheduled-stop-334324"
	Jan 09 00:35:36 scheduled-stop-334324 kubelet[1383]: I0109 00:35:36.112980    1383 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/54f6f5da88b40b3704fff96610f2e783-usr-share-ca-certificates\") pod \"kube-controller-manager-scheduled-stop-334324\" (UID: \"54f6f5da88b40b3704fff96610f2e783\") " pod="kube-system/kube-controller-manager-scheduled-stop-334324"
	Jan 09 00:35:36 scheduled-stop-334324 kubelet[1383]: I0109 00:35:36.677113    1383 apiserver.go:52] "Watching apiserver"
	Jan 09 00:35:36 scheduled-stop-334324 kubelet[1383]: I0109 00:35:36.711373    1383 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
	Jan 09 00:35:36 scheduled-stop-334324 kubelet[1383]: E0109 00:35:36.856927    1383 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-apiserver-scheduled-stop-334324\" already exists" pod="kube-system/kube-apiserver-scheduled-stop-334324"
	Jan 09 00:35:36 scheduled-stop-334324 kubelet[1383]: I0109 00:35:36.858328    1383 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-scheduled-stop-334324" podStartSLOduration=0.858270172 podCreationTimestamp="2024-01-09 00:35:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-01-09 00:35:36.857678077 +0000 UTC m=+1.289201341" watchObservedRunningTime="2024-01-09 00:35:36.858270172 +0000 UTC m=+1.289793436"
	Jan 09 00:35:36 scheduled-stop-334324 kubelet[1383]: I0109 00:35:36.882402    1383 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/etcd-scheduled-stop-334324" podStartSLOduration=0.882361789 podCreationTimestamp="2024-01-09 00:35:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-01-09 00:35:36.880578216 +0000 UTC m=+1.312101488" watchObservedRunningTime="2024-01-09 00:35:36.882361789 +0000 UTC m=+1.313885052"
	Jan 09 00:35:37 scheduled-stop-334324 kubelet[1383]: I0109 00:35:37.034393    1383 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-scheduled-stop-334324" podStartSLOduration=3.034346777 podCreationTimestamp="2024-01-09 00:35:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-01-09 00:35:36.932153528 +0000 UTC m=+1.363676792" watchObservedRunningTime="2024-01-09 00:35:37.034346777 +0000 UTC m=+1.465870066"
	Jan 09 00:35:37 scheduled-stop-334324 kubelet[1383]: I0109 00:35:37.073021    1383 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-scheduled-stop-334324" podStartSLOduration=2.072979986 podCreationTimestamp="2024-01-09 00:35:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-01-09 00:35:37.03499393 +0000 UTC m=+1.466517193" watchObservedRunningTime="2024-01-09 00:35:37.072979986 +0000 UTC m=+1.504503250"
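
The "Failed creating a mirror pod ... already exists" errors are benign on a kubelet restart: the kubelet re-reads the static pod manifests and finds the corresponding mirror pods already in the API. While the profile is still running (it is deleted later in this test), that relationship could be inspected like so (illustrative):

	minikube -p scheduled-stop-334324 ssh -- ls /etc/kubernetes/manifests
	kubectl -n kube-system get pod kube-apiserver-scheduled-stop-334324 \
	  -o jsonpath='{.metadata.annotations.kubernetes\.io/config\.mirror}'   # non-empty for mirror pods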
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p scheduled-stop-334324 -n scheduled-stop-334324
helpers_test.go:261: (dbg) Run:  kubectl --context scheduled-stop-334324 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: storage-provisioner
helpers_test.go:274: ======> post-mortem[TestScheduledStopUnix]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context scheduled-stop-334324 describe pod storage-provisioner
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context scheduled-stop-334324 describe pod storage-provisioner: exit status 1 (90.902065ms)

** stderr ** 
	Error from server (NotFound): pods "storage-provisioner" not found

** /stderr **
helpers_test.go:279: kubectl --context scheduled-stop-334324 describe pod storage-provisioner: exit status 1
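
The NotFound here is a namespace mismatch rather than a missing pod: the earlier listing used -A across all namespaces and found storage-provisioner in kube-system, while the describe defaults to the default namespace. Adding the namespace would have located it (illustrative):

	kubectl --context scheduled-stop-334324 -n kube-system describe pod storage-provisioner
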
helpers_test.go:175: Cleaning up "scheduled-stop-334324" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-334324
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-334324: (1.983988726s)
--- FAIL: TestScheduledStopUnix (38.46s)
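
To iterate on this failure locally, the test can be selected by name with Go's -run filter; note the minikube integration suite generally expects extra flags (driver, binary under test) beyond this minimal sketch:

	go test ./test/integration -run 'TestScheduledStopUnix' -timeout 30m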

TestRunningBinaryUpgrade (114.77s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:133: (dbg) Run:  /tmp/minikube-v1.17.0.3350261060.exe start -p running-upgrade-197513 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:133: (dbg) Done: /tmp/minikube-v1.17.0.3350261060.exe start -p running-upgrade-197513 --memory=2200 --vm-driver=docker  --container-runtime=crio: (1m44.754952157s)
version_upgrade_test.go:143: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-197513 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:143: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p running-upgrade-197513 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90 (5.199596943s)

-- stdout --
	* [running-upgrade-197513] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17830
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17830-1678586/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17830-1678586/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	* Using the docker driver based on existing profile
	* Starting control plane node running-upgrade-197513 in cluster running-upgrade-197513
	* Pulling base image v0.0.42-1704751654-17830 ...
	* Updating the running docker "running-upgrade-197513" container ...
	
	

-- /stdout --
** stderr ** 
	I0109 00:41:39.668050 1810951 out.go:296] Setting OutFile to fd 1 ...
	I0109 00:41:39.668301 1810951 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0109 00:41:39.668326 1810951 out.go:309] Setting ErrFile to fd 2...
	I0109 00:41:39.668345 1810951 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0109 00:41:39.668643 1810951 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17830-1678586/.minikube/bin
	I0109 00:41:39.669060 1810951 out.go:303] Setting JSON to false
	I0109 00:41:39.670133 1810951 start.go:128] hostinfo: {"hostname":"ip-172-31-30-239","uptime":26642,"bootTime":1704734258,"procs":230,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0109 00:41:39.670232 1810951 start.go:138] virtualization:  
	I0109 00:41:39.673992 1810951 out.go:177] * [running-upgrade-197513] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0109 00:41:39.678257 1810951 out.go:177]   - MINIKUBE_LOCATION=17830
	I0109 00:41:39.678332 1810951 notify.go:220] Checking for updates...
	I0109 00:41:39.681119 1810951 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0109 00:41:39.683250 1810951 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17830-1678586/kubeconfig
	I0109 00:41:39.685478 1810951 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17830-1678586/.minikube
	I0109 00:41:39.687306 1810951 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0109 00:41:39.689177 1810951 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0109 00:41:39.691584 1810951 config.go:182] Loaded profile config "running-upgrade-197513": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I0109 00:41:39.693953 1810951 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I0109 00:41:39.696059 1810951 driver.go:392] Setting default libvirt URI to qemu:///system
	I0109 00:41:39.758581 1810951 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I0109 00:41:39.758697 1810951 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0109 00:41:39.921980 1810951 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:54 SystemTime:2024-01-09 00:41:39.908401123 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0109 00:41:39.922083 1810951 docker.go:295] overlay module found
	I0109 00:41:39.925691 1810951 out.go:177] * Using the docker driver based on existing profile
	I0109 00:41:39.927968 1810951 start.go:298] selected driver: docker
	I0109 00:41:39.927988 1810951 start.go:902] validating driver "docker" against &{Name:running-upgrade-197513 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:running-upgrade-197513 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.70.138 Port:8443 KubernetesVersion:v1.20.2 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I0109 00:41:39.928079 1810951 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0109 00:41:39.928689 1810951 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0109 00:41:40.058575 1810951 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:54 SystemTime:2024-01-09 00:41:40.04798073 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0109 00:41:40.058937 1810951 cni.go:84] Creating CNI manager for ""
	I0109 00:41:40.058962 1810951 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0109 00:41:40.058976 1810951 start_flags.go:323] config:
	{Name:running-upgrade-197513 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:running-upgrade-197513 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.70.138 Port:8443 KubernetesVersion:v1.20.2 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I0109 00:41:40.061419 1810951 out.go:177] * Starting control plane node running-upgrade-197513 in cluster running-upgrade-197513
	I0109 00:41:40.063393 1810951 cache.go:121] Beginning downloading kic base image for docker with crio
	I0109 00:41:40.065820 1810951 out.go:177] * Pulling base image v0.0.42-1704751654-17830 ...
	I0109 00:41:40.068277 1810951 preload.go:132] Checking if preload exists for k8s version v1.20.2 and runtime crio
	I0109 00:41:40.068441 1810951 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e in local docker daemon
	I0109 00:41:40.110556 1810951 image.go:83] Found gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e in local docker daemon, skipping pull
	I0109 00:41:40.110582 1810951 cache.go:144] gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e exists in daemon, skipping load
	W0109 00:41:40.135245 1810951 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.2/preloaded-images-k8s-v18-v1.20.2-cri-o-overlay-arm64.tar.lz4 status code: 404
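
The 404 simply means no preloaded image tarball was ever published for the v1.20.2/cri-o/arm64 combination, so minikube falls back to the per-image cache in the lines that follow. The missing artifact can be confirmed directly (illustrative):

	curl -sI https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.2/preloaded-images-k8s-v18-v1.20.2-cri-o-overlay-arm64.tar.lz4 | head -n 1
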
	I0109 00:41:40.135408 1810951 profile.go:148] Saving config to /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/running-upgrade-197513/config.json ...
	I0109 00:41:40.135699 1810951 cache.go:194] Successfully downloaded all kic artifacts
	I0109 00:41:40.135754 1810951 start.go:365] acquiring machines lock for running-upgrade-197513: {Name:mkc69ac174383e2fe66f9404b9481ce812cfdb33 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0109 00:41:40.135811 1810951 start.go:369] acquired machines lock for "running-upgrade-197513" in 33.149µs
	I0109 00:41:40.135826 1810951 start.go:96] Skipping create...Using existing machine configuration
	I0109 00:41:40.135833 1810951 fix.go:54] fixHost starting: 
	I0109 00:41:40.136124 1810951 cli_runner.go:164] Run: docker container inspect running-upgrade-197513 --format={{.State.Status}}
	I0109 00:41:40.136387 1810951 cache.go:107] acquiring lock: {Name:mk3bff1da4c2c9d99b8d2eaa6644fd637ad4fc93 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0109 00:41:40.136447 1810951 cache.go:115] /home/jenkins/minikube-integration/17830-1678586/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0109 00:41:40.136455 1810951 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/17830-1678586/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 72.78µs
	I0109 00:41:40.136463 1810951 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/17830-1678586/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0109 00:41:40.136482 1810951 cache.go:107] acquiring lock: {Name:mk3b40c0c9f88bfd61767222d81202cf3e22a163 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0109 00:41:40.136513 1810951 cache.go:115] /home/jenkins/minikube-integration/17830-1678586/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2 exists
	I0109 00:41:40.136521 1810951 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.20.2" -> "/home/jenkins/minikube-integration/17830-1678586/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2" took 38.991µs
	I0109 00:41:40.136532 1810951 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.20.2 -> /home/jenkins/minikube-integration/17830-1678586/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2 succeeded
	I0109 00:41:40.136541 1810951 cache.go:107] acquiring lock: {Name:mk73c5240ae84c48665d067b63f65c779eef85b1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0109 00:41:40.136571 1810951 cache.go:115] /home/jenkins/minikube-integration/17830-1678586/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2 exists
	I0109 00:41:40.136583 1810951 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.20.2" -> "/home/jenkins/minikube-integration/17830-1678586/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2" took 43.34µs
	I0109 00:41:40.136591 1810951 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.20.2 -> /home/jenkins/minikube-integration/17830-1678586/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2 succeeded
	I0109 00:41:40.136599 1810951 cache.go:107] acquiring lock: {Name:mk9c7f656405221a0cc8f00eef48a4312a5772a0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0109 00:41:40.136624 1810951 cache.go:115] /home/jenkins/minikube-integration/17830-1678586/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2 exists
	I0109 00:41:40.136629 1810951 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.20.2" -> "/home/jenkins/minikube-integration/17830-1678586/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2" took 31.294µs
	I0109 00:41:40.136640 1810951 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.20.2 -> /home/jenkins/minikube-integration/17830-1678586/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2 succeeded
	I0109 00:41:40.136650 1810951 cache.go:107] acquiring lock: {Name:mk7ff1e446f4b29ae3f85102f468527aee0604ae Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0109 00:41:40.136679 1810951 cache.go:115] /home/jenkins/minikube-integration/17830-1678586/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2 exists
	I0109 00:41:40.136684 1810951 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.20.2" -> "/home/jenkins/minikube-integration/17830-1678586/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2" took 35.988µs
	I0109 00:41:40.136691 1810951 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.20.2 -> /home/jenkins/minikube-integration/17830-1678586/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2 succeeded
	I0109 00:41:40.136699 1810951 cache.go:107] acquiring lock: {Name:mk822689e0c91f93c42f62581fcf619ccb20e1e3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0109 00:41:40.136724 1810951 cache.go:115] /home/jenkins/minikube-integration/17830-1678586/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2 exists
	I0109 00:41:40.136736 1810951 cache.go:96] cache image "registry.k8s.io/pause:3.2" -> "/home/jenkins/minikube-integration/17830-1678586/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2" took 30.557µs
	I0109 00:41:40.136743 1810951 cache.go:80] save to tar file registry.k8s.io/pause:3.2 -> /home/jenkins/minikube-integration/17830-1678586/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2 succeeded
	I0109 00:41:40.136750 1810951 cache.go:107] acquiring lock: {Name:mka42d349ef27271ebaa44714a66503b1199159c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0109 00:41:40.136779 1810951 cache.go:115] /home/jenkins/minikube-integration/17830-1678586/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0 exists
	I0109 00:41:40.136784 1810951 cache.go:96] cache image "registry.k8s.io/etcd:3.4.13-0" -> "/home/jenkins/minikube-integration/17830-1678586/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0" took 34.65µs
	I0109 00:41:40.136794 1810951 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.13-0 -> /home/jenkins/minikube-integration/17830-1678586/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0 succeeded
	I0109 00:41:40.136802 1810951 cache.go:107] acquiring lock: {Name:mkaa8567c2e8babe1144025b47a0b79e69be89fb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0109 00:41:40.136830 1810951 cache.go:115] /home/jenkins/minikube-integration/17830-1678586/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0 exists
	I0109 00:41:40.136835 1810951 cache.go:96] cache image "registry.k8s.io/coredns:1.7.0" -> "/home/jenkins/minikube-integration/17830-1678586/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0" took 34.297µs
	I0109 00:41:40.136841 1810951 cache.go:80] save to tar file registry.k8s.io/coredns:1.7.0 -> /home/jenkins/minikube-integration/17830-1678586/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0 succeeded
	I0109 00:41:40.136846 1810951 cache.go:87] Successfully saved all images to host disk.
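
Every "exists ... took Nµs" pair above is a cache hit: the images were saved by the earlier v1.17.0 run, so nothing is re-downloaded here. The cache layout on the host mirrors the registry paths (illustrative):

	ls /home/jenkins/minikube-integration/17830-1678586/.minikube/cache/images/arm64/registry.k8s.io/
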
	I0109 00:41:40.171892 1810951 fix.go:102] recreateIfNeeded on running-upgrade-197513: state=Running err=<nil>
	W0109 00:41:40.171919 1810951 fix.go:128] unexpected machine state, will restart: <nil>
	I0109 00:41:40.175046 1810951 out.go:177] * Updating the running docker "running-upgrade-197513" container ...
	I0109 00:41:40.177278 1810951 machine.go:88] provisioning docker machine ...
	I0109 00:41:40.177307 1810951 ubuntu.go:169] provisioning hostname "running-upgrade-197513"
	I0109 00:41:40.177378 1810951 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-197513
	I0109 00:41:40.200672 1810951 main.go:141] libmachine: Using SSH client type: native
	I0109 00:41:40.201144 1810951 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bfa60] 0x3c21d0 <nil>  [] 0s} 127.0.0.1 34550 <nil> <nil>}
	I0109 00:41:40.201157 1810951 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-197513 && echo "running-upgrade-197513" | sudo tee /etc/hostname
	I0109 00:41:40.381170 1810951 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-197513
	
	I0109 00:41:40.382282 1810951 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-197513
	I0109 00:41:40.419366 1810951 main.go:141] libmachine: Using SSH client type: native
	I0109 00:41:40.419778 1810951 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bfa60] 0x3c21d0 <nil>  [] 0s} 127.0.0.1 34550 <nil> <nil>}
	I0109 00:41:40.419799 1810951 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-197513' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-197513/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-197513' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0109 00:41:40.652223 1810951 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0109 00:41:40.652250 1810951 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17830-1678586/.minikube CaCertPath:/home/jenkins/minikube-integration/17830-1678586/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17830-1678586/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17830-1678586/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17830-1678586/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17830-1678586/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17830-1678586/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17830-1678586/.minikube}
	I0109 00:41:40.652278 1810951 ubuntu.go:177] setting up certificates
	I0109 00:41:40.652302 1810951 provision.go:83] configureAuth start
	I0109 00:41:40.652383 1810951 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" running-upgrade-197513
	I0109 00:41:40.675903 1810951 provision.go:138] copyHostCerts
	I0109 00:41:40.675988 1810951 exec_runner.go:144] found /home/jenkins/minikube-integration/17830-1678586/.minikube/ca.pem, removing ...
	I0109 00:41:40.676002 1810951 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17830-1678586/.minikube/ca.pem
	I0109 00:41:40.676080 1810951 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17830-1678586/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17830-1678586/.minikube/ca.pem (1082 bytes)
	I0109 00:41:40.676189 1810951 exec_runner.go:144] found /home/jenkins/minikube-integration/17830-1678586/.minikube/cert.pem, removing ...
	I0109 00:41:40.676200 1810951 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17830-1678586/.minikube/cert.pem
	I0109 00:41:40.676238 1810951 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17830-1678586/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17830-1678586/.minikube/cert.pem (1123 bytes)
	I0109 00:41:40.676297 1810951 exec_runner.go:144] found /home/jenkins/minikube-integration/17830-1678586/.minikube/key.pem, removing ...
	I0109 00:41:40.676304 1810951 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17830-1678586/.minikube/key.pem
	I0109 00:41:40.676329 1810951 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17830-1678586/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17830-1678586/.minikube/key.pem (1679 bytes)
	I0109 00:41:40.676381 1810951 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17830-1678586/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17830-1678586/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17830-1678586/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-197513 san=[192.168.70.138 127.0.0.1 localhost 127.0.0.1 minikube running-upgrade-197513]
	I0109 00:41:41.219740 1810951 provision.go:172] copyRemoteCerts
	I0109 00:41:41.219839 1810951 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0109 00:41:41.219888 1810951 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-197513
	I0109 00:41:41.239179 1810951 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34550 SSHKeyPath:/home/jenkins/minikube-integration/17830-1678586/.minikube/machines/running-upgrade-197513/id_rsa Username:docker}
	I0109 00:41:41.344695 1810951 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-1678586/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0109 00:41:41.384087 1810951 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-1678586/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0109 00:41:41.440352 1810951 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-1678586/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0109 00:41:41.464491 1810951 provision.go:86] duration metric: configureAuth took 812.169662ms
	I0109 00:41:41.464527 1810951 ubuntu.go:193] setting minikube options for container-runtime
	I0109 00:41:41.464724 1810951 config.go:182] Loaded profile config "running-upgrade-197513": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I0109 00:41:41.464846 1810951 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-197513
	I0109 00:41:41.488925 1810951 main.go:141] libmachine: Using SSH client type: native
	I0109 00:41:41.489370 1810951 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bfa60] 0x3c21d0 <nil>  [] 0s} 127.0.0.1 34550 <nil> <nil>}
	I0109 00:41:41.489396 1810951 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0109 00:41:42.172370 1810951 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0109 00:41:42.172398 1810951 machine.go:91] provisioned docker machine in 1.99510073s
	I0109 00:41:42.172408 1810951 start.go:300] post-start starting for "running-upgrade-197513" (driver="docker")
	I0109 00:41:42.172420 1810951 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0109 00:41:42.172493 1810951 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0109 00:41:42.172563 1810951 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-197513
	I0109 00:41:42.204003 1810951 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34550 SSHKeyPath:/home/jenkins/minikube-integration/17830-1678586/.minikube/machines/running-upgrade-197513/id_rsa Username:docker}
	I0109 00:41:42.309654 1810951 ssh_runner.go:195] Run: cat /etc/os-release
	I0109 00:41:42.314321 1810951 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0109 00:41:42.314352 1810951 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0109 00:41:42.314364 1810951 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0109 00:41:42.314375 1810951 info.go:137] Remote host: Ubuntu 20.04.1 LTS
	I0109 00:41:42.314391 1810951 filesync.go:126] Scanning /home/jenkins/minikube-integration/17830-1678586/.minikube/addons for local assets ...
	I0109 00:41:42.314463 1810951 filesync.go:126] Scanning /home/jenkins/minikube-integration/17830-1678586/.minikube/files for local assets ...
	I0109 00:41:42.314556 1810951 filesync.go:149] local asset: /home/jenkins/minikube-integration/17830-1678586/.minikube/files/etc/ssl/certs/16839672.pem -> 16839672.pem in /etc/ssl/certs
	I0109 00:41:42.314671 1810951 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0109 00:41:42.323905 1810951 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-1678586/.minikube/files/etc/ssl/certs/16839672.pem --> /etc/ssl/certs/16839672.pem (1708 bytes)
	I0109 00:41:42.348967 1810951 start.go:303] post-start completed in 176.543603ms
	I0109 00:41:42.349061 1810951 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0109 00:41:42.349116 1810951 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-197513
	I0109 00:41:42.383224 1810951 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34550 SSHKeyPath:/home/jenkins/minikube-integration/17830-1678586/.minikube/machines/running-upgrade-197513/id_rsa Username:docker}
	I0109 00:41:42.497842 1810951 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0109 00:41:42.504373 1810951 fix.go:56] fixHost completed within 2.368533577s
	I0109 00:41:42.504399 1810951 start.go:83] releasing machines lock for "running-upgrade-197513", held for 2.368579485s
	I0109 00:41:42.504466 1810951 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" running-upgrade-197513
	I0109 00:41:42.532095 1810951 ssh_runner.go:195] Run: cat /version.json
	I0109 00:41:42.532151 1810951 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-197513
	I0109 00:41:42.532365 1810951 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0109 00:41:42.532402 1810951 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-197513
	I0109 00:41:42.574200 1810951 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34550 SSHKeyPath:/home/jenkins/minikube-integration/17830-1678586/.minikube/machines/running-upgrade-197513/id_rsa Username:docker}
	I0109 00:41:42.577827 1810951 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34550 SSHKeyPath:/home/jenkins/minikube-integration/17830-1678586/.minikube/machines/running-upgrade-197513/id_rsa Username:docker}
	W0109 00:41:42.680506 1810951 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0109 00:41:42.680596 1810951 ssh_runner.go:195] Run: systemctl --version
	I0109 00:41:42.842668 1810951 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0109 00:41:43.082340 1810951 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0109 00:41:43.089088 1810951 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0109 00:41:43.116090 1810951 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0109 00:41:43.116176 1810951 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0109 00:41:43.195265 1810951 cni.go:262] disabled [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
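
Disabling the stock loopback/bridge CNI configs ensures CRI-O loads only the CNI that minikube manages (kindnet was selected above); the .mk_disabled suffix keeps the change reversible. Each disable amounts to roughly this (illustrative):

	sudo mv /etc/cni/net.d/100-crio-bridge.conf /etc/cni/net.d/100-crio-bridge.conf.mk_disabled
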
	I0109 00:41:43.195300 1810951 start.go:475] detecting cgroup driver to use...
	I0109 00:41:43.195330 1810951 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0109 00:41:43.195391 1810951 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0109 00:41:43.285621 1810951 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0109 00:41:43.305234 1810951 docker.go:203] disabling cri-docker service (if available) ...
	I0109 00:41:43.305307 1810951 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0109 00:41:43.323750 1810951 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0109 00:41:43.339036 1810951 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W0109 00:41:43.353555 1810951 docker.go:213] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I0109 00:41:43.353649 1810951 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0109 00:41:43.761018 1810951 docker.go:219] disabling docker service ...
	I0109 00:41:43.761097 1810951 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0109 00:41:43.820008 1810951 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0109 00:41:43.919723 1810951 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0109 00:41:44.394939 1810951 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0109 00:41:44.673126 1810951 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0109 00:41:44.695294 1810951 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0109 00:41:44.733193 1810951 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0109 00:41:44.733275 1810951 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0109 00:41:44.748005 1810951 out.go:177] 
	W0109 00:41:44.750297 1810951 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	W0109 00:41:44.750322 1810951 out.go:239] * 
	W0109 00:41:44.751464 1810951 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0109 00:41:44.752958 1810951 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:145: upgrade from v1.17.0 to HEAD failed: out/minikube-linux-arm64 start -p running-upgrade-197513 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90
panic.go:523: *** TestRunningBinaryUpgrade FAILED at 2024-01-09 00:41:44.790161678 +0000 UTC m=+2470.339787472
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestRunningBinaryUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect running-upgrade-197513
helpers_test.go:235: (dbg) docker inspect running-upgrade-197513:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "02016c1f52c95bf905a9cae4b3599523819cc2ce2925786edb3595453b263252",
	        "Created": "2024-01-09T00:40:16.20052256Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1803041,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-01-09T00:40:17.244356721Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:9b79b8263a5873a7b57b8bb7698df1f71e90108b3174dea92dc6c576c0a9dbf9",
	        "ResolvConfPath": "/var/lib/docker/containers/02016c1f52c95bf905a9cae4b3599523819cc2ce2925786edb3595453b263252/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/02016c1f52c95bf905a9cae4b3599523819cc2ce2925786edb3595453b263252/hostname",
	        "HostsPath": "/var/lib/docker/containers/02016c1f52c95bf905a9cae4b3599523819cc2ce2925786edb3595453b263252/hosts",
	        "LogPath": "/var/lib/docker/containers/02016c1f52c95bf905a9cae4b3599523819cc2ce2925786edb3595453b263252/02016c1f52c95bf905a9cae4b3599523819cc2ce2925786edb3595453b263252-json.log",
	        "Name": "/running-upgrade-197513",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "running-upgrade-197513:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "running-upgrade-197513",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/73ca39f9acc5bff9f8e5983115bbe0afe64b59bba02056e1a6583678220d1cf4-init/diff:/var/lib/docker/overlay2/75fa33501ce61322980c65cbf192cfe5fd35ca9fddab21c6d5df3acedd4c553d/diff:/var/lib/docker/overlay2/fdcc533a40e33c4ae5141da0c43121649c1de9c83b74faf64c08ef85efc0f59d/diff:/var/lib/docker/overlay2/a43f6594bf3a7d7a74c780867e9906d9d774f86b34d9877aa130fd29b40cc2f6/diff:/var/lib/docker/overlay2/9a6d0260ef788b8b7d620b864576362c38f3351482cc38fc1910e486d4eef11d/diff:/var/lib/docker/overlay2/aa2063bac13b26dae106703ad8f3dfd98e5acd8b3fa61f2f7b20af443c9e8b1e/diff:/var/lib/docker/overlay2/553a50faee1d88367ad4e288189c2e4457325958b5f0cf25448868d87b183482/diff:/var/lib/docker/overlay2/d6c61ec29316c1ab37f6b103a2e5a5a72b0ba1189d0a241baff596d3d18aefb8/diff:/var/lib/docker/overlay2/ded69932de0adb7126a0ca8648dc404b81a82d630b145cfc20a860f97a5eceb7/diff:/var/lib/docker/overlay2/42aecde3d15f474dbb1d83651289283b275c5cabc22f99d24a33b9593a017e8c/diff:/var/lib/docker/overlay2/24f9a7
f492301a9bfaa0bec272fd999ae7e89078cf4a48a519aa11e400ab6267/diff:/var/lib/docker/overlay2/b95bd788e027776661c2c651c8f959a19599378c3c4684a9245894a5716a13cc/diff:/var/lib/docker/overlay2/9e7c46e593bb63e07a60f5facb56531bf110da8cb4132007a88ab2b89803e5ef/diff:/var/lib/docker/overlay2/784792fe6c0527307c94bd08a1bc3884daa5cd8f5071b4449ee89133c33654e6/diff:/var/lib/docker/overlay2/b4a29588f59f98cce9224c8308319074041a3da9da081d4f93abb22e075a9e4e/diff:/var/lib/docker/overlay2/41ac2d9cd94cdee20a2874e4ae41333b6643498d892474a1b21da0e2e1ac2c64/diff:/var/lib/docker/overlay2/710029ca84ee8702596c23901ff3fdf37e05e7201f24b35519955a0df40f0031/diff:/var/lib/docker/overlay2/0136f3d109e7a6c7eb5439c0fa48b4d308cd0ea7e57b90197bd7d2cd6be157e1/diff:/var/lib/docker/overlay2/4042488bff7a2d2cddd9f334b764da8953ac71b275fa6e3b8f63abfdb312956d/diff:/var/lib/docker/overlay2/0fc6cbd3b1ac7accf2126fa4e30b6866e54bd7448028852dd1673d447bd6f231/diff:/var/lib/docker/overlay2/7c0d162a511e55eba47b4a1a29210c75be1c44a9d43e406991776be6e55077c7/diff:/var/lib/d
ocker/overlay2/6388aee5034b1909871a20968d2275bc64a43e8bc80804c8f016914918a3671f/diff:/var/lib/docker/overlay2/a3e9911db9c988b0befd5f124931952ded665cb1c3d1a229dcdbb823540dc4b7/diff:/var/lib/docker/overlay2/ba8d93536baca55a79e4c0ba5fe6662c3a033d831758f9d2cd2cfb84cc7fe5ec/diff:/var/lib/docker/overlay2/f1e670bc371d1a1c837a48c16417264e623e9fe572d5d2fe58cd9d4955e0c0bc/diff:/var/lib/docker/overlay2/8c58cf54ee2e3204ae8babae1a2b618668923ffc498f4efae79e2d5bfaa97572/diff:/var/lib/docker/overlay2/b9a9478c74b24236a2089ddae16b53d73ea632a5180140c0a20e8d9aa4453c05/diff:/var/lib/docker/overlay2/00996894f317c0586a8a0a6ad78a1f63315330a7d7a29a7bf91bd80d5ed09d30/diff:/var/lib/docker/overlay2/bd4639363b4cd6ded443b529120dd8a8ee0da3b91148db40e029ecde140ff0a9/diff:/var/lib/docker/overlay2/7f7e4a1a1d1a6dd1bebd609b693dd0e517550d181ae77fe0986c71c84dcf4685/diff:/var/lib/docker/overlay2/6e128307ff461ea7ecba7b91c1ec5f632e271895beff9132fd8a4f6bba76019f/diff:/var/lib/docker/overlay2/c6f766eac66e04a846d891a39e6ea42ad10450edc70f26fe51339c72ba1
be7c6/diff:/var/lib/docker/overlay2/4558507f9a68b74bcc11b8c176e1194283564816c45ac2962f9e2705b5d9ea67/diff:/var/lib/docker/overlay2/17d819f679d4b86551d5747b0d01acb86e5babf6a1c483eecf46a164f02b65ff/diff:/var/lib/docker/overlay2/44cef1e92e1d0ce4476e31feb188e2be3f10070a729d553f1f4be9b33fad2373/diff",
	                "MergedDir": "/var/lib/docker/overlay2/73ca39f9acc5bff9f8e5983115bbe0afe64b59bba02056e1a6583678220d1cf4/merged",
	                "UpperDir": "/var/lib/docker/overlay2/73ca39f9acc5bff9f8e5983115bbe0afe64b59bba02056e1a6583678220d1cf4/diff",
	                "WorkDir": "/var/lib/docker/overlay2/73ca39f9acc5bff9f8e5983115bbe0afe64b59bba02056e1a6583678220d1cf4/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "running-upgrade-197513",
	                "Source": "/var/lib/docker/volumes/running-upgrade-197513/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "running-upgrade-197513",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "running-upgrade-197513",
	                "name.minikube.sigs.k8s.io": "running-upgrade-197513",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "51e7fbe67f06c868698f8c21610c09b4150f8c55cf28349f290bed93fc20ed37",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34550"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34549"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34548"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34547"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/51e7fbe67f06",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "running-upgrade-197513": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.70.138"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "02016c1f52c9",
	                        "running-upgrade-197513"
	                    ],
	                    "NetworkID": "07c9d675ee93d9d5b422d7516cd5f082f8be197b3c9e1b394d8ab6c9fc6c0e12",
	                    "EndpointID": "4e1758dfb7bf89d1e1bcae438306e6500f0197a21271d5cb1ba92b52c477394d",
	                    "Gateway": "192.168.70.1",
	                    "IPAddress": "192.168.70.138",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:46:8a",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
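For reference, the HostPort 34550 that sshutil dialed in the stderr log above comes straight from the "22/tcp" entry under NetworkSettings.Ports in this inspect output; the same Go-template lookup the test runs can be reproduced by hand (illustrative only, not part of the test):

	docker container inspect \
	  -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' \
	  running-upgrade-197513
	# prints 34550 while the container is up, matching the Ports block above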
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p running-upgrade-197513 -n running-upgrade-197513
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p running-upgrade-197513 -n running-upgrade-197513: exit status 4 (586.139759ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0109 00:41:45.320365 1811670 status.go:415] kubeconfig endpoint: extract IP: "running-upgrade-197513" does not appear in /home/jenkins/minikube-integration/17830-1678586/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 4 (may be ok)
helpers_test.go:241: "running-upgrade-197513" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:175: Cleaning up "running-upgrade-197513" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-197513
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-197513: (2.79588861s)
--- FAIL: TestRunningBinaryUpgrade (114.77s)
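The root cause is visible in the stderr log above: the upgraded binary tries to set pause_image via sed in /etc/crio/crio.conf.d/02-crio.conf, but that drop-in does not exist in the v1.17.0 kicbase image (which presumably still ships a monolithic /etc/crio/crio.conf), so sed exits 2 and start aborts with RUNTIME_ENABLE. A minimal sketch of a workaround, assuming the paths from the log and a hypothetical drop-in seeded by hand inside the container:

	# hypothetical guard: create the drop-in the new binary expects before editing it
	sudo mkdir -p /etc/crio/crio.conf.d
	[ -f /etc/crio/crio.conf.d/02-crio.conf ] || \
	  printf '[crio.image]\npause_image = "registry.k8s.io/pause:3.2"\n' | \
	  sudo tee /etc/crio/crio.conf.d/02-crio.conf >/dev/null
	# the failing command from the log now succeeds
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' \
	  /etc/crio/crio.conf.d/02-crio.conf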

                                                
                                    
x
+
TestMissingContainerUpgrade (147.01s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:322: (dbg) Run:  /tmp/minikube-v1.17.0.4228592781.exe start -p missing-upgrade-632213 --memory=2200 --driver=docker  --container-runtime=crio
E0109 00:36:13.783909 1683967 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/ingress-addon-legacy-037418/client.crt: no such file or directory
E0109 00:36:55.256433 1683967 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/functional-451422/client.crt: no such file or directory
version_upgrade_test.go:322: (dbg) Done: /tmp/minikube-v1.17.0.4228592781.exe start -p missing-upgrade-632213 --memory=2200 --driver=docker  --container-runtime=crio: (1m37.656711072s)
version_upgrade_test.go:331: (dbg) Run:  docker stop missing-upgrade-632213
version_upgrade_test.go:331: (dbg) Done: docker stop missing-upgrade-632213: (1.794127644s)
version_upgrade_test.go:336: (dbg) Run:  docker rm missing-upgrade-632213
version_upgrade_test.go:342: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-632213 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:342: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p missing-upgrade-632213 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90 (40.924744944s)

                                                
                                                
-- stdout --
	* [missing-upgrade-632213] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17830
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17830-1678586/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17830-1678586/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	* Using the docker driver based on existing profile
	* Starting control plane node missing-upgrade-632213 in cluster missing-upgrade-632213
	* Pulling base image v0.0.42-1704751654-17830 ...
	* docker "missing-upgrade-632213" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0109 00:37:34.574876 1788426 out.go:296] Setting OutFile to fd 1 ...
	I0109 00:37:34.575057 1788426 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0109 00:37:34.575063 1788426 out.go:309] Setting ErrFile to fd 2...
	I0109 00:37:34.575069 1788426 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0109 00:37:34.575366 1788426 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17830-1678586/.minikube/bin
	I0109 00:37:34.575765 1788426 out.go:303] Setting JSON to false
	I0109 00:37:34.576717 1788426 start.go:128] hostinfo: {"hostname":"ip-172-31-30-239","uptime":26397,"bootTime":1704734258,"procs":209,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0109 00:37:34.576793 1788426 start.go:138] virtualization:  
	I0109 00:37:34.580424 1788426 out.go:177] * [missing-upgrade-632213] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0109 00:37:34.582938 1788426 notify.go:220] Checking for updates...
	I0109 00:37:34.583595 1788426 out.go:177]   - MINIKUBE_LOCATION=17830
	I0109 00:37:34.585641 1788426 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0109 00:37:34.587875 1788426 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17830-1678586/kubeconfig
	I0109 00:37:34.590018 1788426 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17830-1678586/.minikube
	I0109 00:37:34.591977 1788426 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0109 00:37:34.593936 1788426 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0109 00:37:34.596562 1788426 config.go:182] Loaded profile config "missing-upgrade-632213": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I0109 00:37:34.599629 1788426 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I0109 00:37:34.601788 1788426 driver.go:392] Setting default libvirt URI to qemu:///system
	I0109 00:37:34.640671 1788426 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I0109 00:37:34.640816 1788426 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0109 00:37:34.813328 1788426 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:35 OomKillDisable:true NGoroutines:45 SystemTime:2024-01-09 00:37:34.800951332 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0109 00:37:34.813422 1788426 docker.go:295] overlay module found
	I0109 00:37:34.816477 1788426 out.go:177] * Using the docker driver based on existing profile
	I0109 00:37:34.818605 1788426 start.go:298] selected driver: docker
	I0109 00:37:34.818622 1788426 start.go:902] validating driver "docker" against &{Name:missing-upgrade-632213 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:missing-upgrade-632213 Namespace:default APIServerName:minikubeCA APIServer
Names:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.59.150 Port:8443 KubernetesVersion:v1.20.2 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath
: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I0109 00:37:34.818727 1788426 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0109 00:37:34.819330 1788426 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0109 00:37:34.947796 1788426 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:35 OomKillDisable:true NGoroutines:45 SystemTime:2024-01-09 00:37:34.934901462 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0109 00:37:34.948150 1788426 cni.go:84] Creating CNI manager for ""
	I0109 00:37:34.948177 1788426 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0109 00:37:34.948189 1788426 start_flags.go:323] config:
	{Name:missing-upgrade-632213 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:missing-upgrade-632213 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket
: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.59.150 Port:8443 KubernetesVersion:v1.20.2 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I0109 00:37:34.951411 1788426 out.go:177] * Starting control plane node missing-upgrade-632213 in cluster missing-upgrade-632213
	I0109 00:37:34.953744 1788426 cache.go:121] Beginning downloading kic base image for docker with crio
	I0109 00:37:34.956520 1788426 out.go:177] * Pulling base image v0.0.42-1704751654-17830 ...
	I0109 00:37:34.958748 1788426 preload.go:132] Checking if preload exists for k8s version v1.20.2 and runtime crio
	I0109 00:37:34.958953 1788426 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e in local docker daemon
	I0109 00:37:34.990245 1788426 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e to local cache
	I0109 00:37:34.991174 1788426 image.go:63] Checking for gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e in local cache directory
	I0109 00:37:34.991830 1788426 image.go:118] Writing gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e to local cache
	W0109 00:37:35.026014 1788426 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.2/preloaded-images-k8s-v18-v1.20.2-cri-o-overlay-arm64.tar.lz4 status code: 404
	I0109 00:37:35.026184 1788426 profile.go:148] Saving config to /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/missing-upgrade-632213/config.json ...
	I0109 00:37:35.026568 1788426 cache.go:107] acquiring lock: {Name:mk3bff1da4c2c9d99b8d2eaa6644fd637ad4fc93 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0109 00:37:35.026640 1788426 cache.go:115] /home/jenkins/minikube-integration/17830-1678586/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0109 00:37:35.026652 1788426 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/17830-1678586/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 86.006µs
	I0109 00:37:35.026661 1788426 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/17830-1678586/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0109 00:37:35.026673 1788426 cache.go:107] acquiring lock: {Name:mk3b40c0c9f88bfd61767222d81202cf3e22a163 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0109 00:37:35.026762 1788426 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.2
	I0109 00:37:35.026896 1788426 cache.go:107] acquiring lock: {Name:mk7ff1e446f4b29ae3f85102f468527aee0604ae Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0109 00:37:35.026997 1788426 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.2
	I0109 00:37:35.028630 1788426 cache.go:107] acquiring lock: {Name:mk822689e0c91f93c42f62581fcf619ccb20e1e3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0109 00:37:35.028730 1788426 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0109 00:37:35.028817 1788426 cache.go:107] acquiring lock: {Name:mka42d349ef27271ebaa44714a66503b1199159c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0109 00:37:35.028884 1788426 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0109 00:37:35.028946 1788426 cache.go:107] acquiring lock: {Name:mkaa8567c2e8babe1144025b47a0b79e69be89fb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0109 00:37:35.029001 1788426 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0109 00:37:35.029162 1788426 cache.go:107] acquiring lock: {Name:mk73c5240ae84c48665d067b63f65c779eef85b1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0109 00:37:35.029223 1788426 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.2
	I0109 00:37:35.029304 1788426 cache.go:107] acquiring lock: {Name:mk9c7f656405221a0cc8f00eef48a4312a5772a0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0109 00:37:35.029363 1788426 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.2
	I0109 00:37:35.050414 1788426 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.2
	I0109 00:37:35.050701 1788426 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0109 00:37:35.050799 1788426 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.2
	I0109 00:37:35.050918 1788426 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.2
	I0109 00:37:35.051423 1788426 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0109 00:37:35.051669 1788426 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.2
	I0109 00:37:35.052268 1788426 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0109 00:37:35.423755 1788426 cache.go:162] opening:  /home/jenkins/minikube-integration/17830-1678586/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2
	I0109 00:37:35.432263 1788426 cache.go:162] opening:  /home/jenkins/minikube-integration/17830-1678586/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2
	I0109 00:37:35.438364 1788426 cache.go:162] opening:  /home/jenkins/minikube-integration/17830-1678586/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2
	W0109 00:37:35.455093 1788426 image.go:265] image registry.k8s.io/coredns:1.7.0 arch mismatch: want arm64 got amd64. fixing
	I0109 00:37:35.455240 1788426 cache.go:162] opening:  /home/jenkins/minikube-integration/17830-1678586/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0
	W0109 00:37:35.456666 1788426 image.go:265] image registry.k8s.io/etcd:3.4.13-0 arch mismatch: want arm64 got amd64. fixing
	I0109 00:37:35.456759 1788426 cache.go:162] opening:  /home/jenkins/minikube-integration/17830-1678586/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0
	I0109 00:37:35.458391 1788426 cache.go:162] opening:  /home/jenkins/minikube-integration/17830-1678586/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2
	W0109 00:37:35.466279 1788426 image.go:265] image registry.k8s.io/kube-proxy:v1.20.2 arch mismatch: want arm64 got amd64. fixing
	I0109 00:37:35.466393 1788426 cache.go:162] opening:  /home/jenkins/minikube-integration/17830-1678586/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2
	I0109 00:37:35.552345 1788426 cache.go:157] /home/jenkins/minikube-integration/17830-1678586/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2 exists
	I0109 00:37:35.552380 1788426 cache.go:96] cache image "registry.k8s.io/pause:3.2" -> "/home/jenkins/minikube-integration/17830-1678586/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2" took 523.749587ms
	I0109 00:37:35.552393 1788426 cache.go:80] save to tar file registry.k8s.io/pause:3.2 -> /home/jenkins/minikube-integration/17830-1678586/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2 succeeded
	    > gcr.io/k8s-minikube/kicbase...:  81.28 KiB / 287.99 MiB [>] 0.03% ? p/s ?
	I0109 00:37:35.920030 1788426 cache.go:157] /home/jenkins/minikube-integration/17830-1678586/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2 exists
	I0109 00:37:35.920062 1788426 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.20.2" -> "/home/jenkins/minikube-integration/17830-1678586/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2" took 890.758048ms
	I0109 00:37:35.920076 1788426 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.20.2 -> /home/jenkins/minikube-integration/17830-1678586/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2 succeeded
	I0109 00:37:35.935706 1788426 cache.go:157] /home/jenkins/minikube-integration/17830-1678586/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0 exists
	I0109 00:37:35.935777 1788426 cache.go:96] cache image "registry.k8s.io/coredns:1.7.0" -> "/home/jenkins/minikube-integration/17830-1678586/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0" took 906.830607ms
	I0109 00:37:35.935804 1788426 cache.go:80] save to tar file registry.k8s.io/coredns:1.7.0 -> /home/jenkins/minikube-integration/17830-1678586/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0 succeeded
	    > gcr.io/k8s-minikube/kicbase...:  4.82 MiB / 287.99 MiB  1.67% 7.99 MiB p/s
	I0109 00:37:36.292344 1788426 cache.go:157] /home/jenkins/minikube-integration/17830-1678586/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2 exists
	I0109 00:37:36.292413 1788426 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.20.2" -> "/home/jenkins/minikube-integration/17830-1678586/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2" took 1.265741497s
	I0109 00:37:36.292447 1788426 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.20.2 -> /home/jenkins/minikube-integration/17830-1678586/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2 succeeded
	    > gcr.io/k8s-minikube/kicbase...:  23.07 MiB / 287.99 MiB  8.01% 9.44 MiB p/s
	I0109 00:37:36.862234 1788426 cache.go:157] /home/jenkins/minikube-integration/17830-1678586/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2 exists
	I0109 00:37:36.862310 1788426 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.20.2" -> "/home/jenkins/minikube-integration/17830-1678586/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2" took 1.833147072s
	I0109 00:37:36.862338 1788426 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.20.2 -> /home/jenkins/minikube-integration/17830-1678586/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2 succeeded
	I0109 00:37:36.979220 1788426 cache.go:157] /home/jenkins/minikube-integration/17830-1678586/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2 exists
	I0109 00:37:36.979298 1788426 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.20.2" -> "/home/jenkins/minikube-integration/17830-1678586/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2" took 1.952413684s
	I0109 00:37:36.979326 1788426 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.20.2 -> /home/jenkins/minikube-integration/17830-1678586/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2 succeeded
	    > gcr.io/k8s-minikube/kicbase...:  114.54 MiB / 287.99 MiB  39.77% 16.42 MiB p/s
	I0109 00:37:39.320948 1788426 cache.go:157] /home/jenkins/minikube-integration/17830-1678586/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0 exists
	I0109 00:37:39.320975 1788426 cache.go:96] cache image "registry.k8s.io/etcd:3.4.13-0" -> "/home/jenkins/minikube-integration/17830-1678586/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0" took 4.292158052s
	I0109 00:37:39.321002 1788426 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.13-0 -> /home/jenkins/minikube-integration/17830-1678586/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0 succeeded
	I0109 00:37:39.321016 1788426 cache.go:87] Successfully saved all images to host disk.
	    > gcr.io/k8s-minikube/kicbase...:  287.99 MiB / 287.99 MiB  100.00% 33.61 MiB p/s
	I0109 00:37:44.164446 1788426 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e as a tarball
	I0109 00:37:44.164458 1788426 cache.go:162] Loading gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e from local cache
	I0109 00:37:45.280333 1788426 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e from cached tarball
	I0109 00:37:45.280379 1788426 cache.go:194] Successfully downloaded all kic artifacts
	I0109 00:37:45.280438 1788426 start.go:365] acquiring machines lock for missing-upgrade-632213: {Name:mkbed4e17545835a0305d2a56f857fde5c6c854a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0109 00:37:45.280516 1788426 start.go:369] acquired machines lock for "missing-upgrade-632213" in 55.435µs
	I0109 00:37:45.280543 1788426 start.go:96] Skipping create...Using existing machine configuration
	I0109 00:37:45.280555 1788426 fix.go:54] fixHost starting: 
	I0109 00:37:45.280835 1788426 cli_runner.go:164] Run: docker container inspect missing-upgrade-632213 --format={{.State.Status}}
	W0109 00:37:45.304400 1788426 cli_runner.go:211] docker container inspect missing-upgrade-632213 --format={{.State.Status}} returned with exit code 1
	I0109 00:37:45.304745 1788426 fix.go:102] recreateIfNeeded on missing-upgrade-632213: state= err=unknown state "missing-upgrade-632213": docker container inspect missing-upgrade-632213 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-632213
	I0109 00:37:45.304778 1788426 fix.go:107] machineExists: false. err=machine does not exist
	I0109 00:37:45.309181 1788426 out.go:177] * docker "missing-upgrade-632213" container is missing, will recreate.
	I0109 00:37:45.311331 1788426 delete.go:124] DEMOLISHING missing-upgrade-632213 ...
	I0109 00:37:45.311428 1788426 cli_runner.go:164] Run: docker container inspect missing-upgrade-632213 --format={{.State.Status}}
	W0109 00:37:45.329790 1788426 cli_runner.go:211] docker container inspect missing-upgrade-632213 --format={{.State.Status}} returned with exit code 1
	W0109 00:37:45.329850 1788426 stop.go:75] unable to get state: unknown state "missing-upgrade-632213": docker container inspect missing-upgrade-632213 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-632213
	I0109 00:37:45.329872 1788426 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "missing-upgrade-632213": docker container inspect missing-upgrade-632213 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-632213
	I0109 00:37:45.330332 1788426 cli_runner.go:164] Run: docker container inspect missing-upgrade-632213 --format={{.State.Status}}
	W0109 00:37:45.347479 1788426 cli_runner.go:211] docker container inspect missing-upgrade-632213 --format={{.State.Status}} returned with exit code 1
	I0109 00:37:45.347550 1788426 delete.go:82] Unable to get host status for missing-upgrade-632213, assuming it has already been deleted: state: unknown state "missing-upgrade-632213": docker container inspect missing-upgrade-632213 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-632213
	I0109 00:37:45.347616 1788426 cli_runner.go:164] Run: docker container inspect -f {{.Id}} missing-upgrade-632213
	W0109 00:37:45.363723 1788426 cli_runner.go:211] docker container inspect -f {{.Id}} missing-upgrade-632213 returned with exit code 1
	I0109 00:37:45.363759 1788426 kic.go:371] could not find the container missing-upgrade-632213 to remove it. will try anyways
	I0109 00:37:45.363815 1788426 cli_runner.go:164] Run: docker container inspect missing-upgrade-632213 --format={{.State.Status}}
	W0109 00:37:45.380313 1788426 cli_runner.go:211] docker container inspect missing-upgrade-632213 --format={{.State.Status}} returned with exit code 1
	W0109 00:37:45.380370 1788426 oci.go:84] error getting container status, will try to delete anyways: unknown state "missing-upgrade-632213": docker container inspect missing-upgrade-632213 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-632213
	I0109 00:37:45.380439 1788426 cli_runner.go:164] Run: docker exec --privileged -t missing-upgrade-632213 /bin/bash -c "sudo init 0"
	W0109 00:37:45.396585 1788426 cli_runner.go:211] docker exec --privileged -t missing-upgrade-632213 /bin/bash -c "sudo init 0" returned with exit code 1
	I0109 00:37:45.397062 1788426 oci.go:650] error shutdown missing-upgrade-632213: docker exec --privileged -t missing-upgrade-632213 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-632213
	I0109 00:37:46.398057 1788426 cli_runner.go:164] Run: docker container inspect missing-upgrade-632213 --format={{.State.Status}}
	W0109 00:37:46.414494 1788426 cli_runner.go:211] docker container inspect missing-upgrade-632213 --format={{.State.Status}} returned with exit code 1
	I0109 00:37:46.414595 1788426 oci.go:662] temporary error verifying shutdown: unknown state "missing-upgrade-632213": docker container inspect missing-upgrade-632213 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-632213
	I0109 00:37:46.414609 1788426 oci.go:664] temporary error: container missing-upgrade-632213 status is  but expect it to be exited
	I0109 00:37:46.414641 1788426 retry.go:31] will retry after 608.668573ms: couldn't verify container is exited. %v: unknown state "missing-upgrade-632213": docker container inspect missing-upgrade-632213 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-632213
	I0109 00:37:47.024131 1788426 cli_runner.go:164] Run: docker container inspect missing-upgrade-632213 --format={{.State.Status}}
	W0109 00:37:47.041898 1788426 cli_runner.go:211] docker container inspect missing-upgrade-632213 --format={{.State.Status}} returned with exit code 1
	I0109 00:37:47.041967 1788426 oci.go:662] temporary error verifying shutdown: unknown state "missing-upgrade-632213": docker container inspect missing-upgrade-632213 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-632213
	I0109 00:37:47.041981 1788426 oci.go:664] temporary error: container missing-upgrade-632213 status is  but expect it to be exited
	I0109 00:37:47.042007 1788426 retry.go:31] will retry after 523.791482ms: couldn't verify container is exited. %v: unknown state "missing-upgrade-632213": docker container inspect missing-upgrade-632213 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-632213
	I0109 00:37:47.566764 1788426 cli_runner.go:164] Run: docker container inspect missing-upgrade-632213 --format={{.State.Status}}
	W0109 00:37:47.584087 1788426 cli_runner.go:211] docker container inspect missing-upgrade-632213 --format={{.State.Status}} returned with exit code 1
	I0109 00:37:47.584155 1788426 oci.go:662] temporary error verifying shutdown: unknown state "missing-upgrade-632213": docker container inspect missing-upgrade-632213 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-632213
	I0109 00:37:47.584169 1788426 oci.go:664] temporary error: container missing-upgrade-632213 status is  but expect it to be exited
	I0109 00:37:47.584196 1788426 retry.go:31] will retry after 765.324697ms: couldn't verify container is exited. %v: unknown state "missing-upgrade-632213": docker container inspect missing-upgrade-632213 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-632213
	I0109 00:37:48.349772 1788426 cli_runner.go:164] Run: docker container inspect missing-upgrade-632213 --format={{.State.Status}}
	W0109 00:37:48.368141 1788426 cli_runner.go:211] docker container inspect missing-upgrade-632213 --format={{.State.Status}} returned with exit code 1
	I0109 00:37:48.368206 1788426 oci.go:662] temporary error verifying shutdown: unknown state "missing-upgrade-632213": docker container inspect missing-upgrade-632213 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-632213
	I0109 00:37:48.368218 1788426 oci.go:664] temporary error: container missing-upgrade-632213 status is  but expect it to be exited
	I0109 00:37:48.368245 1788426 retry.go:31] will retry after 1.423754464s: couldn't verify container is exited. %v: unknown state "missing-upgrade-632213": docker container inspect missing-upgrade-632213 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-632213
	I0109 00:37:49.792439 1788426 cli_runner.go:164] Run: docker container inspect missing-upgrade-632213 --format={{.State.Status}}
	W0109 00:37:49.808977 1788426 cli_runner.go:211] docker container inspect missing-upgrade-632213 --format={{.State.Status}} returned with exit code 1
	I0109 00:37:49.809043 1788426 oci.go:662] temporary error verifying shutdown: unknown state "missing-upgrade-632213": docker container inspect missing-upgrade-632213 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-632213
	I0109 00:37:49.809058 1788426 oci.go:664] temporary error: container missing-upgrade-632213 status is  but expect it to be exited
	I0109 00:37:49.809085 1788426 retry.go:31] will retry after 3.064983682s: couldn't verify container is exited. %v: unknown state "missing-upgrade-632213": docker container inspect missing-upgrade-632213 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-632213
	I0109 00:37:52.874589 1788426 cli_runner.go:164] Run: docker container inspect missing-upgrade-632213 --format={{.State.Status}}
	W0109 00:37:52.920677 1788426 cli_runner.go:211] docker container inspect missing-upgrade-632213 --format={{.State.Status}} returned with exit code 1
	I0109 00:37:52.920737 1788426 oci.go:662] temporary error verifying shutdown: unknown state "missing-upgrade-632213": docker container inspect missing-upgrade-632213 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-632213
	I0109 00:37:52.920746 1788426 oci.go:664] temporary error: container missing-upgrade-632213 status is  but expect it to be exited
	I0109 00:37:52.920770 1788426 retry.go:31] will retry after 3.507142337s: couldn't verify container is exited. %v: unknown state "missing-upgrade-632213": docker container inspect missing-upgrade-632213 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-632213
	I0109 00:37:56.428224 1788426 cli_runner.go:164] Run: docker container inspect missing-upgrade-632213 --format={{.State.Status}}
	W0109 00:37:56.448288 1788426 cli_runner.go:211] docker container inspect missing-upgrade-632213 --format={{.State.Status}} returned with exit code 1
	I0109 00:37:56.448373 1788426 oci.go:662] temporary error verifying shutdown: unknown state "missing-upgrade-632213": docker container inspect missing-upgrade-632213 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-632213
	I0109 00:37:56.448393 1788426 oci.go:664] temporary error: container missing-upgrade-632213 status is  but expect it to be exited
	I0109 00:37:56.448418 1788426 retry.go:31] will retry after 5.342204149s: couldn't verify container is exited. %v: unknown state "missing-upgrade-632213": docker container inspect missing-upgrade-632213 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-632213
	I0109 00:38:01.790987 1788426 cli_runner.go:164] Run: docker container inspect missing-upgrade-632213 --format={{.State.Status}}
	W0109 00:38:01.813348 1788426 cli_runner.go:211] docker container inspect missing-upgrade-632213 --format={{.State.Status}} returned with exit code 1
	I0109 00:38:01.813410 1788426 oci.go:662] temporary error verifying shutdown: unknown state "missing-upgrade-632213": docker container inspect missing-upgrade-632213 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-632213
	I0109 00:38:01.813419 1788426 oci.go:664] temporary error: container missing-upgrade-632213 status is  but expect it to be exited
	I0109 00:38:01.813451 1788426 oci.go:88] couldn't shut down missing-upgrade-632213 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "missing-upgrade-632213": docker container inspect missing-upgrade-632213 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-632213
	 
	I0109 00:38:01.813526 1788426 cli_runner.go:164] Run: docker rm -f -v missing-upgrade-632213
	I0109 00:38:01.836219 1788426 cli_runner.go:164] Run: docker container inspect -f {{.Id}} missing-upgrade-632213
	W0109 00:38:01.853266 1788426 cli_runner.go:211] docker container inspect -f {{.Id}} missing-upgrade-632213 returned with exit code 1
	I0109 00:38:01.853388 1788426 cli_runner.go:164] Run: docker network inspect missing-upgrade-632213 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0109 00:38:01.873882 1788426 cli_runner.go:164] Run: docker network rm missing-upgrade-632213
	I0109 00:38:02.012190 1788426 fix.go:114] Sleeping 1 second for extra luck!
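The cleanup above shows minikube's pattern for tearing down a half-missing machine: poll `docker container inspect` for an "exited" state with growing delays (retry.go:31), give up after several attempts, then fall back to `docker rm -f -v` and remove the network anyway. A minimal bash sketch of that poll-then-force-remove pattern (illustrative only, not minikube's retry.go; the delays and profile name are placeholders):

	name=missing-upgrade-632213                  # placeholder profile name
	for delay in 1 1 2 3 5; do                   # growing backoff, roughly like the log
	  state=$(docker container inspect "$name" --format '{{.State.Status}}' 2>/dev/null)
	  [ "$state" = "exited" ] && break           # shutdown verified
	  sleep "$delay"
	done
	docker rm -f -v "$name" 2>/dev/null || true  # force-remove either way, as logged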
	I0109 00:38:03.012325 1788426 start.go:125] createHost starting for "" (driver="docker")
	I0109 00:38:03.023783 1788426 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0109 00:38:03.023948 1788426 start.go:159] libmachine.API.Create for "missing-upgrade-632213" (driver="docker")
	I0109 00:38:03.023979 1788426 client.go:168] LocalClient.Create starting
	I0109 00:38:03.024057 1788426 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17830-1678586/.minikube/certs/ca.pem
	I0109 00:38:03.024131 1788426 main.go:141] libmachine: Decoding PEM data...
	I0109 00:38:03.024152 1788426 main.go:141] libmachine: Parsing certificate...
	I0109 00:38:03.024216 1788426 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17830-1678586/.minikube/certs/cert.pem
	I0109 00:38:03.024239 1788426 main.go:141] libmachine: Decoding PEM data...
	I0109 00:38:03.024257 1788426 main.go:141] libmachine: Parsing certificate...
	I0109 00:38:03.024948 1788426 cli_runner.go:164] Run: docker network inspect missing-upgrade-632213 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0109 00:38:03.047292 1788426 cli_runner.go:211] docker network inspect missing-upgrade-632213 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0109 00:38:03.047367 1788426 network_create.go:281] running [docker network inspect missing-upgrade-632213] to gather additional debugging logs...
	I0109 00:38:03.047386 1788426 cli_runner.go:164] Run: docker network inspect missing-upgrade-632213
	W0109 00:38:03.077389 1788426 cli_runner.go:211] docker network inspect missing-upgrade-632213 returned with exit code 1
	I0109 00:38:03.077419 1788426 network_create.go:284] error running [docker network inspect missing-upgrade-632213]: docker network inspect missing-upgrade-632213: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network missing-upgrade-632213 not found
	I0109 00:38:03.077438 1788426 network_create.go:286] output of [docker network inspect missing-upgrade-632213]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network missing-upgrade-632213 not found
	
	** /stderr **
	I0109 00:38:03.077542 1788426 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0109 00:38:03.135835 1788426 network.go:214] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-105ffd575afe IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:d2:7c:7b:ae} reservation:<nil>}
	I0109 00:38:03.136218 1788426 network.go:214] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-65d7500bf19c IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:d3:cd:64:67} reservation:<nil>}
	I0109 00:38:03.136552 1788426 network.go:214] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-336f241f60a1 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:0f:4b:44:4c} reservation:<nil>}
	I0109 00:38:03.136996 1788426 network.go:209] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4002949440}
	I0109 00:38:03.137017 1788426 network_create.go:124] attempt to create docker network missing-upgrade-632213 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0109 00:38:03.137093 1788426 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=missing-upgrade-632213 missing-upgrade-632213
	I0109 00:38:03.279643 1788426 network_create.go:108] docker network missing-upgrade-632213 192.168.76.0/24 created
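Before creating the network, the log shows minikube scanning the private 192.168.x.0/24 ranges (network.go:214), skipping each subnet already bound to a bridge interface until it finds a free one, 192.168.76.0/24 here. To see which subnets existing Docker networks occupy, a one-liner like this works (a sketch; GNU xargs assumed):

	# Print each Docker network with the subnet(s) it reserves.
	docker network ls -q | xargs -r -n1 docker network inspect \
	  -f '{{.Name}}: {{range .IPAM.Config}}{{.Subnet}} {{end}}'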
	I0109 00:38:03.279681 1788426 kic.go:121] calculated static IP "192.168.76.2" for the "missing-upgrade-632213" container
	I0109 00:38:03.279764 1788426 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0109 00:38:03.298171 1788426 cli_runner.go:164] Run: docker volume create missing-upgrade-632213 --label name.minikube.sigs.k8s.io=missing-upgrade-632213 --label created_by.minikube.sigs.k8s.io=true
	I0109 00:38:03.319807 1788426 oci.go:103] Successfully created a docker volume missing-upgrade-632213
	I0109 00:38:03.319910 1788426 cli_runner.go:164] Run: docker run --rm --name missing-upgrade-632213-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=missing-upgrade-632213 --entrypoint /usr/bin/test -v missing-upgrade-632213:/var gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e -d /var/lib
	I0109 00:38:05.736534 1788426 cli_runner.go:217] Completed: docker run --rm --name missing-upgrade-632213-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=missing-upgrade-632213 --entrypoint /usr/bin/test -v missing-upgrade-632213:/var gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e -d /var/lib: (2.416571764s)
	I0109 00:38:05.736565 1788426 oci.go:107] Successfully prepared a docker volume missing-upgrade-632213
	I0109 00:38:05.736590 1788426 preload.go:132] Checking if preload exists for k8s version v1.20.2 and runtime crio
	W0109 00:38:05.736732 1788426 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0109 00:38:05.736834 1788426 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0109 00:38:05.830567 1788426 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname missing-upgrade-632213 --name missing-upgrade-632213 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=missing-upgrade-632213 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=missing-upgrade-632213 --network missing-upgrade-632213 --ip 192.168.76.2 --volume missing-upgrade-632213:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e
	I0109 00:38:06.477703 1788426 cli_runner.go:164] Run: docker container inspect missing-upgrade-632213 --format={{.State.Running}}
	I0109 00:38:06.534122 1788426 cli_runner.go:164] Run: docker container inspect missing-upgrade-632213 --format={{.State.Status}}
	I0109 00:38:06.586558 1788426 cli_runner.go:164] Run: docker exec missing-upgrade-632213 stat /var/lib/dpkg/alternatives/iptables
	I0109 00:38:06.703442 1788426 oci.go:144] the created container "missing-upgrade-632213" has a running status.
	I0109 00:38:06.703470 1788426 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17830-1678586/.minikube/machines/missing-upgrade-632213/id_rsa...
	I0109 00:38:06.985613 1788426 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17830-1678586/.minikube/machines/missing-upgrade-632213/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0109 00:38:07.021709 1788426 cli_runner.go:164] Run: docker container inspect missing-upgrade-632213 --format={{.State.Status}}
	I0109 00:38:07.057209 1788426 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0109 00:38:07.057232 1788426 kic_runner.go:114] Args: [docker exec --privileged missing-upgrade-632213 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0109 00:38:07.161837 1788426 cli_runner.go:164] Run: docker container inspect missing-upgrade-632213 --format={{.State.Status}}
	I0109 00:38:07.218480 1788426 machine.go:88] provisioning docker machine ...
	I0109 00:38:07.218512 1788426 ubuntu.go:169] provisioning hostname "missing-upgrade-632213"
	I0109 00:38:07.218579 1788426 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-632213
	I0109 00:38:07.278310 1788426 main.go:141] libmachine: Using SSH client type: native
	I0109 00:38:07.278742 1788426 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bfa60] 0x3c21d0 <nil>  [] 0s} 127.0.0.1 34528 <nil> <nil>}
	I0109 00:38:07.278756 1788426 main.go:141] libmachine: About to run SSH command:
	sudo hostname missing-upgrade-632213 && echo "missing-upgrade-632213" | sudo tee /etc/hostname
	I0109 00:38:07.279397 1788426 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:40356->127.0.0.1:34528: read: connection reset by peer
	I0109 00:38:10.437096 1788426 main.go:141] libmachine: SSH cmd err, output: <nil>: missing-upgrade-632213
	
	I0109 00:38:10.437259 1788426 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-632213
	I0109 00:38:10.459555 1788426 main.go:141] libmachine: Using SSH client type: native
	I0109 00:38:10.459954 1788426 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bfa60] 0x3c21d0 <nil>  [] 0s} 127.0.0.1 34528 <nil> <nil>}
	I0109 00:38:10.459971 1788426 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smissing-upgrade-632213' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 missing-upgrade-632213/g' /etc/hosts;
				else 
					echo '127.0.1.1 missing-upgrade-632213' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0109 00:38:10.599618 1788426 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0109 00:38:10.599650 1788426 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17830-1678586/.minikube CaCertPath:/home/jenkins/minikube-integration/17830-1678586/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17830-1678586/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17830-1678586/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17830-1678586/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17830-1678586/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17830-1678586/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17830-1678586/.minikube}
	I0109 00:38:10.599668 1788426 ubuntu.go:177] setting up certificates
	I0109 00:38:10.599677 1788426 provision.go:83] configureAuth start
	I0109 00:38:10.599734 1788426 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-632213
	I0109 00:38:10.624264 1788426 provision.go:138] copyHostCerts
	I0109 00:38:10.624322 1788426 exec_runner.go:144] found /home/jenkins/minikube-integration/17830-1678586/.minikube/ca.pem, removing ...
	I0109 00:38:10.624332 1788426 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17830-1678586/.minikube/ca.pem
	I0109 00:38:10.624393 1788426 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17830-1678586/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17830-1678586/.minikube/ca.pem (1082 bytes)
	I0109 00:38:10.624478 1788426 exec_runner.go:144] found /home/jenkins/minikube-integration/17830-1678586/.minikube/cert.pem, removing ...
	I0109 00:38:10.624483 1788426 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17830-1678586/.minikube/cert.pem
	I0109 00:38:10.624502 1788426 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17830-1678586/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17830-1678586/.minikube/cert.pem (1123 bytes)
	I0109 00:38:10.624554 1788426 exec_runner.go:144] found /home/jenkins/minikube-integration/17830-1678586/.minikube/key.pem, removing ...
	I0109 00:38:10.624558 1788426 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17830-1678586/.minikube/key.pem
	I0109 00:38:10.624576 1788426 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17830-1678586/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17830-1678586/.minikube/key.pem (1679 bytes)
	I0109 00:38:10.624618 1788426 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17830-1678586/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17830-1678586/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17830-1678586/.minikube/certs/ca-key.pem org=jenkins.missing-upgrade-632213 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube missing-upgrade-632213]
	I0109 00:38:10.972058 1788426 provision.go:172] copyRemoteCerts
	I0109 00:38:10.972148 1788426 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0109 00:38:10.972196 1788426 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-632213
	I0109 00:38:11.002905 1788426 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34528 SSHKeyPath:/home/jenkins/minikube-integration/17830-1678586/.minikube/machines/missing-upgrade-632213/id_rsa Username:docker}
	I0109 00:38:11.106423 1788426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-1678586/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0109 00:38:11.138935 1788426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-1678586/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0109 00:38:11.163674 1788426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-1678586/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0109 00:38:11.190867 1788426 provision.go:86] duration metric: configureAuth took 591.176081ms
	I0109 00:38:11.190902 1788426 ubuntu.go:193] setting minikube options for container-runtime
	I0109 00:38:11.191083 1788426 config.go:182] Loaded profile config "missing-upgrade-632213": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I0109 00:38:11.191183 1788426 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-632213
	I0109 00:38:11.234874 1788426 main.go:141] libmachine: Using SSH client type: native
	I0109 00:38:11.235309 1788426 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bfa60] 0x3c21d0 <nil>  [] 0s} 127.0.0.1 34528 <nil> <nil>}
	I0109 00:38:11.235325 1788426 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0109 00:38:11.698836 1788426 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0109 00:38:11.698857 1788426 machine.go:91] provisioned docker machine in 4.480355089s
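The provisioning step just above writes a CRI-O drop-in with minikube's insecure-registry flag and restarts the service. A quick hedged check that it took effect (run inside the node container; paths as logged):

	cat /etc/sysconfig/crio.minikube   # should contain CRIO_MINIKUBE_OPTIONS
	sudo systemctl is-active crio      # expect "active" after the restart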
	I0109 00:38:11.698866 1788426 client.go:171] LocalClient.Create took 8.674877435s
	I0109 00:38:11.698887 1788426 start.go:167] duration metric: libmachine.API.Create for "missing-upgrade-632213" took 8.674932435s
	I0109 00:38:11.698896 1788426 start.go:300] post-start starting for "missing-upgrade-632213" (driver="docker")
	I0109 00:38:11.698906 1788426 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0109 00:38:11.698966 1788426 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0109 00:38:11.699010 1788426 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-632213
	I0109 00:38:11.722426 1788426 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34528 SSHKeyPath:/home/jenkins/minikube-integration/17830-1678586/.minikube/machines/missing-upgrade-632213/id_rsa Username:docker}
	I0109 00:38:11.827677 1788426 ssh_runner.go:195] Run: cat /etc/os-release
	I0109 00:38:11.832302 1788426 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0109 00:38:11.832366 1788426 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0109 00:38:11.832401 1788426 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0109 00:38:11.832420 1788426 info.go:137] Remote host: Ubuntu 20.04.1 LTS
	I0109 00:38:11.832462 1788426 filesync.go:126] Scanning /home/jenkins/minikube-integration/17830-1678586/.minikube/addons for local assets ...
	I0109 00:38:11.832539 1788426 filesync.go:126] Scanning /home/jenkins/minikube-integration/17830-1678586/.minikube/files for local assets ...
	I0109 00:38:11.832653 1788426 filesync.go:149] local asset: /home/jenkins/minikube-integration/17830-1678586/.minikube/files/etc/ssl/certs/16839672.pem -> 16839672.pem in /etc/ssl/certs
	I0109 00:38:11.832802 1788426 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0109 00:38:11.841481 1788426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-1678586/.minikube/files/etc/ssl/certs/16839672.pem --> /etc/ssl/certs/16839672.pem (1708 bytes)
	I0109 00:38:11.866520 1788426 start.go:303] post-start completed in 167.609847ms
	I0109 00:38:11.867507 1788426 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-632213
	I0109 00:38:11.887621 1788426 profile.go:148] Saving config to /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/missing-upgrade-632213/config.json ...
	I0109 00:38:11.887894 1788426 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0109 00:38:11.887949 1788426 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-632213
	I0109 00:38:11.917489 1788426 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34528 SSHKeyPath:/home/jenkins/minikube-integration/17830-1678586/.minikube/machines/missing-upgrade-632213/id_rsa Username:docker}
	I0109 00:38:12.014844 1788426 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0109 00:38:12.024848 1788426 start.go:128] duration metric: createHost completed in 9.012487083s
	I0109 00:38:12.024958 1788426 cli_runner.go:164] Run: docker container inspect missing-upgrade-632213 --format={{.State.Status}}
	W0109 00:38:12.052762 1788426 fix.go:128] unexpected machine state, will restart: <nil>
	I0109 00:38:12.052787 1788426 machine.go:88] provisioning docker machine ...
	I0109 00:38:12.052805 1788426 ubuntu.go:169] provisioning hostname "missing-upgrade-632213"
	I0109 00:38:12.052872 1788426 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-632213
	I0109 00:38:12.072684 1788426 main.go:141] libmachine: Using SSH client type: native
	I0109 00:38:12.073100 1788426 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bfa60] 0x3c21d0 <nil>  [] 0s} 127.0.0.1 34528 <nil> <nil>}
	I0109 00:38:12.073114 1788426 main.go:141] libmachine: About to run SSH command:
	sudo hostname missing-upgrade-632213 && echo "missing-upgrade-632213" | sudo tee /etc/hostname
	I0109 00:38:12.252848 1788426 main.go:141] libmachine: SSH cmd err, output: <nil>: missing-upgrade-632213
	
	I0109 00:38:12.252936 1788426 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-632213
	I0109 00:38:12.281526 1788426 main.go:141] libmachine: Using SSH client type: native
	I0109 00:38:12.281936 1788426 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bfa60] 0x3c21d0 <nil>  [] 0s} 127.0.0.1 34528 <nil> <nil>}
	I0109 00:38:12.281955 1788426 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smissing-upgrade-632213' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 missing-upgrade-632213/g' /etc/hosts;
				else 
					echo '127.0.1.1 missing-upgrade-632213' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0109 00:38:12.431491 1788426 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0109 00:38:12.431517 1788426 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17830-1678586/.minikube CaCertPath:/home/jenkins/minikube-integration/17830-1678586/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17830-1678586/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17830-1678586/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17830-1678586/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17830-1678586/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17830-1678586/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17830-1678586/.minikube}
	I0109 00:38:12.431540 1788426 ubuntu.go:177] setting up certificates
	I0109 00:38:12.431549 1788426 provision.go:83] configureAuth start
	I0109 00:38:12.431608 1788426 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-632213
	I0109 00:38:12.472038 1788426 provision.go:138] copyHostCerts
	I0109 00:38:12.472106 1788426 exec_runner.go:144] found /home/jenkins/minikube-integration/17830-1678586/.minikube/ca.pem, removing ...
	I0109 00:38:12.472119 1788426 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17830-1678586/.minikube/ca.pem
	I0109 00:38:12.472206 1788426 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17830-1678586/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17830-1678586/.minikube/ca.pem (1082 bytes)
	I0109 00:38:12.472332 1788426 exec_runner.go:144] found /home/jenkins/minikube-integration/17830-1678586/.minikube/cert.pem, removing ...
	I0109 00:38:12.472344 1788426 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17830-1678586/.minikube/cert.pem
	I0109 00:38:12.472392 1788426 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17830-1678586/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17830-1678586/.minikube/cert.pem (1123 bytes)
	I0109 00:38:12.472477 1788426 exec_runner.go:144] found /home/jenkins/minikube-integration/17830-1678586/.minikube/key.pem, removing ...
	I0109 00:38:12.472486 1788426 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17830-1678586/.minikube/key.pem
	I0109 00:38:12.472516 1788426 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17830-1678586/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17830-1678586/.minikube/key.pem (1679 bytes)
	I0109 00:38:12.472579 1788426 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17830-1678586/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17830-1678586/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17830-1678586/.minikube/certs/ca-key.pem org=jenkins.missing-upgrade-632213 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube missing-upgrade-632213]
	I0109 00:38:13.061130 1788426 provision.go:172] copyRemoteCerts
	I0109 00:38:13.061242 1788426 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0109 00:38:13.061333 1788426 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-632213
	I0109 00:38:13.101063 1788426 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34528 SSHKeyPath:/home/jenkins/minikube-integration/17830-1678586/.minikube/machines/missing-upgrade-632213/id_rsa Username:docker}
	I0109 00:38:13.227124 1788426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-1678586/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0109 00:38:13.273410 1788426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-1678586/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0109 00:38:13.316974 1788426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-1678586/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0109 00:38:13.380374 1788426 provision.go:86] duration metric: configureAuth took 948.811914ms
	I0109 00:38:13.380398 1788426 ubuntu.go:193] setting minikube options for container-runtime
	I0109 00:38:13.380572 1788426 config.go:182] Loaded profile config "missing-upgrade-632213": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I0109 00:38:13.380687 1788426 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-632213
	I0109 00:38:13.416412 1788426 main.go:141] libmachine: Using SSH client type: native
	I0109 00:38:13.417332 1788426 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bfa60] 0x3c21d0 <nil>  [] 0s} 127.0.0.1 34528 <nil> <nil>}
	I0109 00:38:13.417357 1788426 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0109 00:38:13.918831 1788426 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0109 00:38:13.918853 1788426 machine.go:91] provisioned docker machine in 1.866057644s
	I0109 00:38:13.918863 1788426 start.go:300] post-start starting for "missing-upgrade-632213" (driver="docker")
	I0109 00:38:13.918876 1788426 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0109 00:38:13.918957 1788426 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0109 00:38:13.919005 1788426 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-632213
	I0109 00:38:13.955663 1788426 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34528 SSHKeyPath:/home/jenkins/minikube-integration/17830-1678586/.minikube/machines/missing-upgrade-632213/id_rsa Username:docker}
	I0109 00:38:14.061917 1788426 ssh_runner.go:195] Run: cat /etc/os-release
	I0109 00:38:14.070375 1788426 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0109 00:38:14.070399 1788426 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0109 00:38:14.070410 1788426 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0109 00:38:14.070417 1788426 info.go:137] Remote host: Ubuntu 20.04.1 LTS
	I0109 00:38:14.070428 1788426 filesync.go:126] Scanning /home/jenkins/minikube-integration/17830-1678586/.minikube/addons for local assets ...
	I0109 00:38:14.070502 1788426 filesync.go:126] Scanning /home/jenkins/minikube-integration/17830-1678586/.minikube/files for local assets ...
	I0109 00:38:14.070578 1788426 filesync.go:149] local asset: /home/jenkins/minikube-integration/17830-1678586/.minikube/files/etc/ssl/certs/16839672.pem -> 16839672.pem in /etc/ssl/certs
	I0109 00:38:14.070687 1788426 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0109 00:38:14.087574 1788426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-1678586/.minikube/files/etc/ssl/certs/16839672.pem --> /etc/ssl/certs/16839672.pem (1708 bytes)
	I0109 00:38:14.120854 1788426 start.go:303] post-start completed in 201.974293ms
	I0109 00:38:14.120934 1788426 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0109 00:38:14.120989 1788426 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-632213
	I0109 00:38:14.147932 1788426 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34528 SSHKeyPath:/home/jenkins/minikube-integration/17830-1678586/.minikube/machines/missing-upgrade-632213/id_rsa Username:docker}
	I0109 00:38:14.248318 1788426 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0109 00:38:14.254523 1788426 fix.go:56] fixHost completed within 28.973963036s
	I0109 00:38:14.254545 1788426 start.go:83] releasing machines lock for "missing-upgrade-632213", held for 28.974016615s
	I0109 00:38:14.254625 1788426 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-632213
	I0109 00:38:14.286475 1788426 ssh_runner.go:195] Run: cat /version.json
	I0109 00:38:14.286509 1788426 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0109 00:38:14.286535 1788426 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-632213
	I0109 00:38:14.286583 1788426 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-632213
	I0109 00:38:14.362242 1788426 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34528 SSHKeyPath:/home/jenkins/minikube-integration/17830-1678586/.minikube/machines/missing-upgrade-632213/id_rsa Username:docker}
	I0109 00:38:14.378630 1788426 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34528 SSHKeyPath:/home/jenkins/minikube-integration/17830-1678586/.minikube/machines/missing-upgrade-632213/id_rsa Username:docker}
	W0109 00:38:14.483397 1788426 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0109 00:38:14.483537 1788426 ssh_runner.go:195] Run: systemctl --version
	I0109 00:38:14.558687 1788426 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0109 00:38:14.671043 1788426 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0109 00:38:14.676816 1788426 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0109 00:38:14.705385 1788426 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0109 00:38:14.706503 1788426 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0109 00:38:14.750497 1788426 cni.go:262] disabled [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
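Note how minikube sidelines conflicting CNI configs by renaming them with a .mk_disabled suffix rather than deleting them, as the find/-exec commands above show. To confirm the result inside the node (a sketch; the glob simply matches nothing if no file was disabled):

	ls -l /etc/cni/net.d/*.mk_disabled 2>/dev/null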
	I0109 00:38:14.750519 1788426 start.go:475] detecting cgroup driver to use...
	I0109 00:38:14.750552 1788426 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0109 00:38:14.750610 1788426 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0109 00:38:14.789957 1788426 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0109 00:38:14.802210 1788426 docker.go:203] disabling cri-docker service (if available) ...
	I0109 00:38:14.802278 1788426 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0109 00:38:14.814198 1788426 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0109 00:38:14.826342 1788426 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W0109 00:38:14.839761 1788426 docker.go:213] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I0109 00:38:14.839824 1788426 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0109 00:38:14.999572 1788426 docker.go:219] disabling docker service ...
	I0109 00:38:14.999643 1788426 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0109 00:38:15.015199 1788426 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0109 00:38:15.029913 1788426 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0109 00:38:15.170765 1788426 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0109 00:38:15.302999 1788426 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0109 00:38:15.316316 1788426 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0109 00:38:15.336143 1788426 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0109 00:38:15.336256 1788426 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0109 00:38:15.351098 1788426 out.go:177] 
	W0109 00:38:15.353194 1788426 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	W0109 00:38:15.353321 1788426 out.go:239] * 
	W0109 00:38:15.355001 1788426 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0109 00:38:15.357596 1788426 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:344: failed missing container upgrade from v1.17.0. args: out/minikube-linux-arm64 start -p missing-upgrade-632213 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio : exit status 90
version_upgrade_test.go:346: *** TestMissingContainerUpgrade FAILED at 2024-01-09 00:38:15.397952626 +0000 UTC m=+2260.947578420
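The root cause is visible in the stderr above: this upgrade test reuses the old kicbase v0.0.17 image (Ubuntu 20.04.1), which does not ship the /etc/crio/crio.conf.d/02-crio.conf drop-in that newer minikube expects, so the pause_image rewrite fails with sed status 2 and the start aborts with RUNTIME_ENABLE. A hedged sketch of a guarded variant of that edit (the fallback to the main config file is an assumption, not what minikube runs):

	conf=/etc/crio/crio.conf.d/02-crio.conf
	[ -f "$conf" ] || conf=/etc/crio/crio.conf   # assumed fallback; layout varies by CRI-O version
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' "$conf"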
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMissingContainerUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect missing-upgrade-632213
helpers_test.go:235: (dbg) docker inspect missing-upgrade-632213:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "f821810939c08d73e7e49f80dd325d5f80b2e0054952d7d666a27810fa93c4a1",
	        "Created": "2024-01-09T00:38:05.85080076Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1790685,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-01-09T00:38:06.464308696Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:9b79b8263a5873a7b57b8bb7698df1f71e90108b3174dea92dc6c576c0a9dbf9",
	        "ResolvConfPath": "/var/lib/docker/containers/f821810939c08d73e7e49f80dd325d5f80b2e0054952d7d666a27810fa93c4a1/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f821810939c08d73e7e49f80dd325d5f80b2e0054952d7d666a27810fa93c4a1/hostname",
	        "HostsPath": "/var/lib/docker/containers/f821810939c08d73e7e49f80dd325d5f80b2e0054952d7d666a27810fa93c4a1/hosts",
	        "LogPath": "/var/lib/docker/containers/f821810939c08d73e7e49f80dd325d5f80b2e0054952d7d666a27810fa93c4a1/f821810939c08d73e7e49f80dd325d5f80b2e0054952d7d666a27810fa93c4a1-json.log",
	        "Name": "/missing-upgrade-632213",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "missing-upgrade-632213:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "missing-upgrade-632213",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/7acb01e80bdde75329387f2525aa714ec90dfac3c5184a5a92e2b26b4814383a-init/diff:/var/lib/docker/overlay2/75fa33501ce61322980c65cbf192cfe5fd35ca9fddab21c6d5df3acedd4c553d/diff:/var/lib/docker/overlay2/fdcc533a40e33c4ae5141da0c43121649c1de9c83b74faf64c08ef85efc0f59d/diff:/var/lib/docker/overlay2/a43f6594bf3a7d7a74c780867e9906d9d774f86b34d9877aa130fd29b40cc2f6/diff:/var/lib/docker/overlay2/9a6d0260ef788b8b7d620b864576362c38f3351482cc38fc1910e486d4eef11d/diff:/var/lib/docker/overlay2/aa2063bac13b26dae106703ad8f3dfd98e5acd8b3fa61f2f7b20af443c9e8b1e/diff:/var/lib/docker/overlay2/553a50faee1d88367ad4e288189c2e4457325958b5f0cf25448868d87b183482/diff:/var/lib/docker/overlay2/d6c61ec29316c1ab37f6b103a2e5a5a72b0ba1189d0a241baff596d3d18aefb8/diff:/var/lib/docker/overlay2/ded69932de0adb7126a0ca8648dc404b81a82d630b145cfc20a860f97a5eceb7/diff:/var/lib/docker/overlay2/42aecde3d15f474dbb1d83651289283b275c5cabc22f99d24a33b9593a017e8c/diff:/var/lib/docker/overlay2/24f9a7
f492301a9bfaa0bec272fd999ae7e89078cf4a48a519aa11e400ab6267/diff:/var/lib/docker/overlay2/b95bd788e027776661c2c651c8f959a19599378c3c4684a9245894a5716a13cc/diff:/var/lib/docker/overlay2/9e7c46e593bb63e07a60f5facb56531bf110da8cb4132007a88ab2b89803e5ef/diff:/var/lib/docker/overlay2/784792fe6c0527307c94bd08a1bc3884daa5cd8f5071b4449ee89133c33654e6/diff:/var/lib/docker/overlay2/b4a29588f59f98cce9224c8308319074041a3da9da081d4f93abb22e075a9e4e/diff:/var/lib/docker/overlay2/41ac2d9cd94cdee20a2874e4ae41333b6643498d892474a1b21da0e2e1ac2c64/diff:/var/lib/docker/overlay2/710029ca84ee8702596c23901ff3fdf37e05e7201f24b35519955a0df40f0031/diff:/var/lib/docker/overlay2/0136f3d109e7a6c7eb5439c0fa48b4d308cd0ea7e57b90197bd7d2cd6be157e1/diff:/var/lib/docker/overlay2/4042488bff7a2d2cddd9f334b764da8953ac71b275fa6e3b8f63abfdb312956d/diff:/var/lib/docker/overlay2/0fc6cbd3b1ac7accf2126fa4e30b6866e54bd7448028852dd1673d447bd6f231/diff:/var/lib/docker/overlay2/7c0d162a511e55eba47b4a1a29210c75be1c44a9d43e406991776be6e55077c7/diff:/var/lib/d
ocker/overlay2/6388aee5034b1909871a20968d2275bc64a43e8bc80804c8f016914918a3671f/diff:/var/lib/docker/overlay2/a3e9911db9c988b0befd5f124931952ded665cb1c3d1a229dcdbb823540dc4b7/diff:/var/lib/docker/overlay2/ba8d93536baca55a79e4c0ba5fe6662c3a033d831758f9d2cd2cfb84cc7fe5ec/diff:/var/lib/docker/overlay2/f1e670bc371d1a1c837a48c16417264e623e9fe572d5d2fe58cd9d4955e0c0bc/diff:/var/lib/docker/overlay2/8c58cf54ee2e3204ae8babae1a2b618668923ffc498f4efae79e2d5bfaa97572/diff:/var/lib/docker/overlay2/b9a9478c74b24236a2089ddae16b53d73ea632a5180140c0a20e8d9aa4453c05/diff:/var/lib/docker/overlay2/00996894f317c0586a8a0a6ad78a1f63315330a7d7a29a7bf91bd80d5ed09d30/diff:/var/lib/docker/overlay2/bd4639363b4cd6ded443b529120dd8a8ee0da3b91148db40e029ecde140ff0a9/diff:/var/lib/docker/overlay2/7f7e4a1a1d1a6dd1bebd609b693dd0e517550d181ae77fe0986c71c84dcf4685/diff:/var/lib/docker/overlay2/6e128307ff461ea7ecba7b91c1ec5f632e271895beff9132fd8a4f6bba76019f/diff:/var/lib/docker/overlay2/c6f766eac66e04a846d891a39e6ea42ad10450edc70f26fe51339c72ba1
be7c6/diff:/var/lib/docker/overlay2/4558507f9a68b74bcc11b8c176e1194283564816c45ac2962f9e2705b5d9ea67/diff:/var/lib/docker/overlay2/17d819f679d4b86551d5747b0d01acb86e5babf6a1c483eecf46a164f02b65ff/diff:/var/lib/docker/overlay2/44cef1e92e1d0ce4476e31feb188e2be3f10070a729d553f1f4be9b33fad2373/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7acb01e80bdde75329387f2525aa714ec90dfac3c5184a5a92e2b26b4814383a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7acb01e80bdde75329387f2525aa714ec90dfac3c5184a5a92e2b26b4814383a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7acb01e80bdde75329387f2525aa714ec90dfac3c5184a5a92e2b26b4814383a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "missing-upgrade-632213",
	                "Source": "/var/lib/docker/volumes/missing-upgrade-632213/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "missing-upgrade-632213",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "missing-upgrade-632213",
	                "name.minikube.sigs.k8s.io": "missing-upgrade-632213",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a85d23bd0a47bb6cfdd56d91c32ae0638ac36bbd7091021571c872dbaca7fbf9",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34528"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34527"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34524"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34526"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34525"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/a85d23bd0a47",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "missing-upgrade-632213": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "f821810939c0",
	                        "missing-upgrade-632213"
	                    ],
	                    "NetworkID": "e66cbb05263d69555308e54a135084c14bbf960fc41c610ca7d53c19c200c510",
	                    "EndpointID": "fffd677c137be136325a7248f47e1dd5a332fa8bb0a7b928fd4439994bc836ac",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
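
The inspect dump above is the raw form of what the test harness queries whenever it needs a single field. Rather than parsing the full JSON, the same values can be read with docker's Go-template syntax, exactly as the cli_runner lines later in this report do for stopped-upgrade-389816. A minimal sketch against the dump above (the port and IP are the ones shown in its Ports and Networks sections):

	# Host port mapped to the container's SSH endpoint (22/tcp) -> 34528 in the dump above
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' missing-upgrade-632213
	# Container IP on its user-defined network -> 192.168.76.2 in the dump above
	docker container inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' missing-upgrade-632213
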
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p missing-upgrade-632213 -n missing-upgrade-632213
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p missing-upgrade-632213 -n missing-upgrade-632213: exit status 6 (418.854455ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0109 00:38:15.832844 1792618 status.go:415] kubeconfig endpoint: got: 192.168.59.150:8443, want: 192.168.76.2:8443

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "missing-upgrade-632213" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:175: Cleaning up "missing-upgrade-632213" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-632213
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-632213: (4.72545647s)
--- FAIL: TestMissingContainerUpgrade (147.01s)
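
The status error above is an endpoint mismatch, not a dead host: the old binary recorded 192.168.59.150:8443 as the API server in kubeconfig, while the recreated container came up on 192.168.76.2 (see the Networks block in the inspect dump above), so the status command exits 6 even though the host state line reads Running. The warning text already names the fix; a minimal sketch of checking and then repairing the stale context, assuming the profile still exists at that point:

	# Show the server endpoint kubeconfig currently records for this cluster
	kubectl config view -o jsonpath='{.clusters[?(@.name=="missing-upgrade-632213")].cluster.server}'
	# Rewrite it to the container's current address, as the warning suggests
	minikube update-context -p missing-upgrade-632213
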

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (88.67s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:196: (dbg) Run:  /tmp/minikube-v1.17.0.977299235.exe start -p stopped-upgrade-389816 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:196: (dbg) Done: /tmp/minikube-v1.17.0.977299235.exe start -p stopped-upgrade-389816 --memory=2200 --vm-driver=docker  --container-runtime=crio: (1m20.385804374s)
version_upgrade_test.go:205: (dbg) Run:  /tmp/minikube-v1.17.0.977299235.exe -p stopped-upgrade-389816 stop
version_upgrade_test.go:205: (dbg) Done: /tmp/minikube-v1.17.0.977299235.exe -p stopped-upgrade-389816 stop: (1.99819608s)
version_upgrade_test.go:211: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-389816 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:211: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p stopped-upgrade-389816 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90 (6.2867792s)

                                                
                                                
-- stdout --
	* [stopped-upgrade-389816] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17830
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17830-1678586/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17830-1678586/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	* Using the docker driver based on existing profile
	* Starting control plane node stopped-upgrade-389816 in cluster stopped-upgrade-389816
	* Pulling base image v0.0.42-1704751654-17830 ...
	* Restarting existing docker container for "stopped-upgrade-389816" ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0109 00:39:44.092927 1799728 out.go:296] Setting OutFile to fd 1 ...
	I0109 00:39:44.093149 1799728 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0109 00:39:44.093176 1799728 out.go:309] Setting ErrFile to fd 2...
	I0109 00:39:44.093195 1799728 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0109 00:39:44.093491 1799728 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17830-1678586/.minikube/bin
	I0109 00:39:44.093934 1799728 out.go:303] Setting JSON to false
	I0109 00:39:44.094931 1799728 start.go:128] hostinfo: {"hostname":"ip-172-31-30-239","uptime":26526,"bootTime":1704734258,"procs":191,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0109 00:39:44.095035 1799728 start.go:138] virtualization:  
	I0109 00:39:44.098477 1799728 out.go:177] * [stopped-upgrade-389816] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0109 00:39:44.100526 1799728 out.go:177]   - MINIKUBE_LOCATION=17830
	I0109 00:39:44.102287 1799728 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0109 00:39:44.100675 1799728 notify.go:220] Checking for updates...
	I0109 00:39:44.105959 1799728 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17830-1678586/kubeconfig
	I0109 00:39:44.107990 1799728 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17830-1678586/.minikube
	I0109 00:39:44.109631 1799728 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0109 00:39:44.111415 1799728 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0109 00:39:44.113614 1799728 config.go:182] Loaded profile config "stopped-upgrade-389816": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I0109 00:39:44.115999 1799728 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I0109 00:39:44.117834 1799728 driver.go:392] Setting default libvirt URI to qemu:///system
	I0109 00:39:44.148196 1799728 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I0109 00:39:44.148310 1799728 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0109 00:39:44.234817 1799728 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:45 SystemTime:2024-01-09 00:39:44.225458569 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0109 00:39:44.234921 1799728 docker.go:295] overlay module found
	I0109 00:39:44.237539 1799728 out.go:177] * Using the docker driver based on existing profile
	I0109 00:39:44.239565 1799728 start.go:298] selected driver: docker
	I0109 00:39:44.239583 1799728 start.go:902] validating driver "docker" against &{Name:stopped-upgrade-389816 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:stopped-upgrade-389816 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.59.185 Port:8443 KubernetesVersion:v1.20.2 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I0109 00:39:44.239685 1799728 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0109 00:39:44.240286 1799728 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0109 00:39:44.307698 1799728 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:45 SystemTime:2024-01-09 00:39:44.297881267 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0109 00:39:44.308055 1799728 cni.go:84] Creating CNI manager for ""
	I0109 00:39:44.308090 1799728 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0109 00:39:44.308106 1799728 start_flags.go:323] config:
	{Name:stopped-upgrade-389816 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:stopped-upgrade-389816 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.59.185 Port:8443 KubernetesVersion:v1.20.2 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I0109 00:39:44.311498 1799728 out.go:177] * Starting control plane node stopped-upgrade-389816 in cluster stopped-upgrade-389816
	I0109 00:39:44.313198 1799728 cache.go:121] Beginning downloading kic base image for docker with crio
	I0109 00:39:44.315538 1799728 out.go:177] * Pulling base image v0.0.42-1704751654-17830 ...
	I0109 00:39:44.317470 1799728 preload.go:132] Checking if preload exists for k8s version v1.20.2 and runtime crio
	I0109 00:39:44.317552 1799728 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e in local docker daemon
	I0109 00:39:44.336234 1799728 image.go:83] Found gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e in local docker daemon, skipping pull
	I0109 00:39:44.336266 1799728 cache.go:144] gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e exists in daemon, skipping load
	W0109 00:39:44.384783 1799728 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.2/preloaded-images-k8s-v18-v1.20.2-cri-o-overlay-arm64.tar.lz4 status code: 404
	I0109 00:39:44.384929 1799728 profile.go:148] Saving config to /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/stopped-upgrade-389816/config.json ...
	I0109 00:39:44.385048 1799728 cache.go:107] acquiring lock: {Name:mk3bff1da4c2c9d99b8d2eaa6644fd637ad4fc93 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0109 00:39:44.385138 1799728 cache.go:115] /home/jenkins/minikube-integration/17830-1678586/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0109 00:39:44.385149 1799728 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/17830-1678586/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 107.431µs
	I0109 00:39:44.385157 1799728 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/17830-1678586/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0109 00:39:44.385168 1799728 cache.go:107] acquiring lock: {Name:mk3b40c0c9f88bfd61767222d81202cf3e22a163 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0109 00:39:44.385182 1799728 cache.go:194] Successfully downloaded all kic artifacts
	I0109 00:39:44.385197 1799728 cache.go:115] /home/jenkins/minikube-integration/17830-1678586/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2 exists
	I0109 00:39:44.385203 1799728 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.20.2" -> "/home/jenkins/minikube-integration/17830-1678586/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2" took 36.751µs
	I0109 00:39:44.385209 1799728 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.20.2 -> /home/jenkins/minikube-integration/17830-1678586/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2 succeeded
	I0109 00:39:44.385217 1799728 start.go:365] acquiring machines lock for stopped-upgrade-389816: {Name:mk5bf3b72e1acc6d3ad769d554139f9b95794e04 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0109 00:39:44.385232 1799728 cache.go:107] acquiring lock: {Name:mk9c7f656405221a0cc8f00eef48a4312a5772a0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0109 00:39:44.385263 1799728 start.go:369] acquired machines lock for "stopped-upgrade-389816" in 32.681µs
	I0109 00:39:44.385272 1799728 cache.go:115] /home/jenkins/minikube-integration/17830-1678586/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2 exists
	I0109 00:39:44.385277 1799728 start.go:96] Skipping create...Using existing machine configuration
	I0109 00:39:44.385277 1799728 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.20.2" -> "/home/jenkins/minikube-integration/17830-1678586/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2" took 46.532µs
	I0109 00:39:44.385283 1799728 fix.go:54] fixHost starting: 
	I0109 00:39:44.385292 1799728 cache.go:107] acquiring lock: {Name:mk7ff1e446f4b29ae3f85102f468527aee0604ae Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0109 00:39:44.385329 1799728 cache.go:115] /home/jenkins/minikube-integration/17830-1678586/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2 exists
	I0109 00:39:44.385334 1799728 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.20.2" -> "/home/jenkins/minikube-integration/17830-1678586/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2" took 42.487µs
	I0109 00:39:44.385340 1799728 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.20.2 -> /home/jenkins/minikube-integration/17830-1678586/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2 succeeded
	I0109 00:39:44.385348 1799728 cache.go:107] acquiring lock: {Name:mk822689e0c91f93c42f62581fcf619ccb20e1e3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0109 00:39:44.385372 1799728 cache.go:115] /home/jenkins/minikube-integration/17830-1678586/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2 exists
	I0109 00:39:44.385377 1799728 cache.go:96] cache image "registry.k8s.io/pause:3.2" -> "/home/jenkins/minikube-integration/17830-1678586/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2" took 29.383µs
	I0109 00:39:44.385384 1799728 cache.go:80] save to tar file registry.k8s.io/pause:3.2 -> /home/jenkins/minikube-integration/17830-1678586/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2 succeeded
	I0109 00:39:44.385392 1799728 cache.go:107] acquiring lock: {Name:mka42d349ef27271ebaa44714a66503b1199159c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0109 00:39:44.385416 1799728 cache.go:115] /home/jenkins/minikube-integration/17830-1678586/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0 exists
	I0109 00:39:44.385420 1799728 cache.go:96] cache image "registry.k8s.io/etcd:3.4.13-0" -> "/home/jenkins/minikube-integration/17830-1678586/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0" took 28.94µs
	I0109 00:39:44.385426 1799728 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.13-0 -> /home/jenkins/minikube-integration/17830-1678586/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0 succeeded
	I0109 00:39:44.385433 1799728 cache.go:107] acquiring lock: {Name:mkaa8567c2e8babe1144025b47a0b79e69be89fb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0109 00:39:44.385456 1799728 cache.go:115] /home/jenkins/minikube-integration/17830-1678586/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0 exists
	I0109 00:39:44.385461 1799728 cache.go:96] cache image "registry.k8s.io/coredns:1.7.0" -> "/home/jenkins/minikube-integration/17830-1678586/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0" took 27.939µs
	I0109 00:39:44.385466 1799728 cache.go:80] save to tar file registry.k8s.io/coredns:1.7.0 -> /home/jenkins/minikube-integration/17830-1678586/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0 succeeded
	I0109 00:39:44.385218 1799728 cache.go:107] acquiring lock: {Name:mk73c5240ae84c48665d067b63f65c779eef85b1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0109 00:39:44.385491 1799728 cache.go:115] /home/jenkins/minikube-integration/17830-1678586/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2 exists
	I0109 00:39:44.385499 1799728 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.20.2" -> "/home/jenkins/minikube-integration/17830-1678586/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2" took 279.519µs
	I0109 00:39:44.385506 1799728 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.20.2 -> /home/jenkins/minikube-integration/17830-1678586/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2 succeeded
	I0109 00:39:44.385284 1799728 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.20.2 -> /home/jenkins/minikube-integration/17830-1678586/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2 succeeded
	I0109 00:39:44.385512 1799728 cache.go:87] Successfully saved all images to host disk.
	I0109 00:39:44.385547 1799728 cli_runner.go:164] Run: docker container inspect stopped-upgrade-389816 --format={{.State.Status}}
	I0109 00:39:44.402231 1799728 fix.go:102] recreateIfNeeded on stopped-upgrade-389816: state=Stopped err=<nil>
	W0109 00:39:44.402267 1799728 fix.go:128] unexpected machine state, will restart: <nil>
	I0109 00:39:44.405763 1799728 out.go:177] * Restarting existing docker container for "stopped-upgrade-389816" ...
	I0109 00:39:44.407706 1799728 cli_runner.go:164] Run: docker start stopped-upgrade-389816
	I0109 00:39:44.725726 1799728 cli_runner.go:164] Run: docker container inspect stopped-upgrade-389816 --format={{.State.Status}}
	I0109 00:39:44.751262 1799728 kic.go:430] container "stopped-upgrade-389816" state is running.
	I0109 00:39:44.751660 1799728 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-389816
	I0109 00:39:44.775317 1799728 profile.go:148] Saving config to /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/stopped-upgrade-389816/config.json ...
	I0109 00:39:44.775533 1799728 machine.go:88] provisioning docker machine ...
	I0109 00:39:44.775547 1799728 ubuntu.go:169] provisioning hostname "stopped-upgrade-389816"
	I0109 00:39:44.775594 1799728 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-389816
	I0109 00:39:44.797446 1799728 main.go:141] libmachine: Using SSH client type: native
	I0109 00:39:44.797861 1799728 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bfa60] 0x3c21d0 <nil>  [] 0s} 127.0.0.1 34541 <nil> <nil>}
	I0109 00:39:44.797874 1799728 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-389816 && echo "stopped-upgrade-389816" | sudo tee /etc/hostname
	I0109 00:39:44.798609 1799728 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0109 00:39:47.954943 1799728 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-389816
	
	I0109 00:39:47.955022 1799728 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-389816
	I0109 00:39:47.973630 1799728 main.go:141] libmachine: Using SSH client type: native
	I0109 00:39:47.974052 1799728 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bfa60] 0x3c21d0 <nil>  [] 0s} 127.0.0.1 34541 <nil> <nil>}
	I0109 00:39:47.974076 1799728 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-389816' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-389816/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-389816' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0109 00:39:48.115734 1799728 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0109 00:39:48.115762 1799728 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17830-1678586/.minikube CaCertPath:/home/jenkins/minikube-integration/17830-1678586/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17830-1678586/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17830-1678586/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17830-1678586/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17830-1678586/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17830-1678586/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17830-1678586/.minikube}
	I0109 00:39:48.115792 1799728 ubuntu.go:177] setting up certificates
	I0109 00:39:48.115801 1799728 provision.go:83] configureAuth start
	I0109 00:39:48.115864 1799728 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-389816
	I0109 00:39:48.139187 1799728 provision.go:138] copyHostCerts
	I0109 00:39:48.139271 1799728 exec_runner.go:144] found /home/jenkins/minikube-integration/17830-1678586/.minikube/ca.pem, removing ...
	I0109 00:39:48.139293 1799728 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17830-1678586/.minikube/ca.pem
	I0109 00:39:48.139369 1799728 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17830-1678586/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17830-1678586/.minikube/ca.pem (1082 bytes)
	I0109 00:39:48.139471 1799728 exec_runner.go:144] found /home/jenkins/minikube-integration/17830-1678586/.minikube/cert.pem, removing ...
	I0109 00:39:48.139479 1799728 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17830-1678586/.minikube/cert.pem
	I0109 00:39:48.139506 1799728 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17830-1678586/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17830-1678586/.minikube/cert.pem (1123 bytes)
	I0109 00:39:48.139569 1799728 exec_runner.go:144] found /home/jenkins/minikube-integration/17830-1678586/.minikube/key.pem, removing ...
	I0109 00:39:48.139582 1799728 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17830-1678586/.minikube/key.pem
	I0109 00:39:48.139619 1799728 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17830-1678586/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17830-1678586/.minikube/key.pem (1679 bytes)
	I0109 00:39:48.139998 1799728 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17830-1678586/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17830-1678586/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17830-1678586/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-389816 san=[192.168.59.185 127.0.0.1 localhost 127.0.0.1 minikube stopped-upgrade-389816]
	I0109 00:39:48.379679 1799728 provision.go:172] copyRemoteCerts
	I0109 00:39:48.379768 1799728 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0109 00:39:48.379811 1799728 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-389816
	I0109 00:39:48.400062 1799728 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34541 SSHKeyPath:/home/jenkins/minikube-integration/17830-1678586/.minikube/machines/stopped-upgrade-389816/id_rsa Username:docker}
	I0109 00:39:48.499400 1799728 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-1678586/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0109 00:39:48.522033 1799728 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-1678586/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0109 00:39:48.546302 1799728 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-1678586/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0109 00:39:48.568930 1799728 provision.go:86] duration metric: configureAuth took 453.114044ms
	I0109 00:39:48.568953 1799728 ubuntu.go:193] setting minikube options for container-runtime
	I0109 00:39:48.569129 1799728 config.go:182] Loaded profile config "stopped-upgrade-389816": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I0109 00:39:48.569237 1799728 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-389816
	I0109 00:39:48.586773 1799728 main.go:141] libmachine: Using SSH client type: native
	I0109 00:39:48.587193 1799728 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bfa60] 0x3c21d0 <nil>  [] 0s} 127.0.0.1 34541 <nil> <nil>}
	I0109 00:39:48.587215 1799728 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0109 00:39:49.015732 1799728 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0109 00:39:49.015810 1799728 machine.go:91] provisioned docker machine in 4.240266446s
	I0109 00:39:49.015836 1799728 start.go:300] post-start starting for "stopped-upgrade-389816" (driver="docker")
	I0109 00:39:49.015875 1799728 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0109 00:39:49.015967 1799728 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0109 00:39:49.016034 1799728 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-389816
	I0109 00:39:49.039075 1799728 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34541 SSHKeyPath:/home/jenkins/minikube-integration/17830-1678586/.minikube/machines/stopped-upgrade-389816/id_rsa Username:docker}
	I0109 00:39:49.141202 1799728 ssh_runner.go:195] Run: cat /etc/os-release
	I0109 00:39:49.145274 1799728 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0109 00:39:49.145305 1799728 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0109 00:39:49.145317 1799728 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0109 00:39:49.145324 1799728 info.go:137] Remote host: Ubuntu 20.04.1 LTS
	I0109 00:39:49.145333 1799728 filesync.go:126] Scanning /home/jenkins/minikube-integration/17830-1678586/.minikube/addons for local assets ...
	I0109 00:39:49.145394 1799728 filesync.go:126] Scanning /home/jenkins/minikube-integration/17830-1678586/.minikube/files for local assets ...
	I0109 00:39:49.145484 1799728 filesync.go:149] local asset: /home/jenkins/minikube-integration/17830-1678586/.minikube/files/etc/ssl/certs/16839672.pem -> 16839672.pem in /etc/ssl/certs
	I0109 00:39:49.145592 1799728 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0109 00:39:49.154038 1799728 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-1678586/.minikube/files/etc/ssl/certs/16839672.pem --> /etc/ssl/certs/16839672.pem (1708 bytes)
	I0109 00:39:49.177529 1799728 start.go:303] post-start completed in 161.664269ms
	I0109 00:39:49.177608 1799728 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0109 00:39:49.177649 1799728 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-389816
	I0109 00:39:49.195670 1799728 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34541 SSHKeyPath:/home/jenkins/minikube-integration/17830-1678586/.minikube/machines/stopped-upgrade-389816/id_rsa Username:docker}
	I0109 00:39:49.293526 1799728 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0109 00:39:49.298896 1799728 fix.go:56] fixHost completed within 4.913604149s
	I0109 00:39:49.298918 1799728 start.go:83] releasing machines lock for "stopped-upgrade-389816", held for 4.913646759s
	I0109 00:39:49.298998 1799728 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-389816
	I0109 00:39:49.317436 1799728 ssh_runner.go:195] Run: cat /version.json
	I0109 00:39:49.317501 1799728 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-389816
	I0109 00:39:49.317752 1799728 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0109 00:39:49.317794 1799728 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-389816
	I0109 00:39:49.337654 1799728 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34541 SSHKeyPath:/home/jenkins/minikube-integration/17830-1678586/.minikube/machines/stopped-upgrade-389816/id_rsa Username:docker}
	I0109 00:39:49.343847 1799728 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34541 SSHKeyPath:/home/jenkins/minikube-integration/17830-1678586/.minikube/machines/stopped-upgrade-389816/id_rsa Username:docker}
	W0109 00:39:49.430466 1799728 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0109 00:39:49.430548 1799728 ssh_runner.go:195] Run: systemctl --version
	I0109 00:39:49.513785 1799728 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0109 00:39:49.774471 1799728 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0109 00:39:49.780326 1799728 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0109 00:39:49.803458 1799728 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0109 00:39:49.803541 1799728 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0109 00:39:49.833244 1799728 cni.go:262] disabled [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0109 00:39:49.833328 1799728 start.go:475] detecting cgroup driver to use...
	I0109 00:39:49.833373 1799728 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0109 00:39:49.833465 1799728 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0109 00:39:49.860296 1799728 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0109 00:39:49.871570 1799728 docker.go:203] disabling cri-docker service (if available) ...
	I0109 00:39:49.871634 1799728 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0109 00:39:49.883028 1799728 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0109 00:39:49.895468 1799728 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W0109 00:39:49.908199 1799728 docker.go:213] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I0109 00:39:49.908269 1799728 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0109 00:39:50.012181 1799728 docker.go:219] disabling docker service ...
	I0109 00:39:50.012302 1799728 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0109 00:39:50.026022 1799728 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0109 00:39:50.041415 1799728 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0109 00:39:50.155872 1799728 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0109 00:39:50.266706 1799728 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0109 00:39:50.278402 1799728 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0109 00:39:50.295353 1799728 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0109 00:39:50.295423 1799728 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0109 00:39:50.308716 1799728 out.go:177] 
	W0109 00:39:50.310886 1799728 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	W0109 00:39:50.310904 1799728 out.go:239] * 
	* 
	W0109 00:39:50.311810 1799728 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0109 00:39:50.313408 1799728 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:213: upgrade from v1.17.0 to HEAD failed: out/minikube-linux-arm64 start -p stopped-upgrade-389816 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (88.67s)
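
The exit-90 failure above comes from the last step in the stderr log: the new binary assumes the cri-o drop-in config /etc/crio/crio.conf.d/02-crio.conf exists so it can rewrite pause_image, but the container was provisioned by v1.17.0 from kicbase v0.0.17, which predates that layout, so the sed exits 2 with "No such file or directory" and start aborts with RUNTIME_ENABLE. (The earlier preload 404 is benign; the cache.go lines show the run falling back to per-image tarballs.) A guarded version of the failing step would look roughly like the sketch below; this illustrates the idea and is not minikube's actual fix:

	CONF=/etc/crio/crio.conf.d/02-crio.conf
	sudo mkdir -p "$(dirname "$CONF")"
	# Seed the drop-in when the old base image lacks it, then set or update pause_image
	[ -f "$CONF" ] || printf '[crio.image]\n' | sudo tee "$CONF" >/dev/null
	if grep -q '^pause_image' "$CONF"; then
	  sudo sed -i 's|^pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' "$CONF"
	else
	  printf 'pause_image = "registry.k8s.io/pause:3.2"\n' | sudo tee -a "$CONF" >/dev/null
	fi
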

                                                
                                    

Test pass (277/316)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 19.4
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.1
10 TestDownloadOnly/v1.28.4/json-events 15.38
11 TestDownloadOnly/v1.28.4/preload-exists 0
15 TestDownloadOnly/v1.28.4/LogsDuration 0.09
17 TestDownloadOnly/v1.29.0-rc.2/json-events 20.52
18 TestDownloadOnly/v1.29.0-rc.2/preload-exists 0
22 TestDownloadOnly/v1.29.0-rc.2/LogsDuration 0.22
23 TestDownloadOnly/DeleteAll 0.44
24 TestDownloadOnly/DeleteAlwaysSucceeds 0.25
26 TestBinaryMirror 0.65
30 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.09
31 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.1
32 TestAddons/Setup 168.31
34 TestAddons/parallel/Registry 16.4
36 TestAddons/parallel/InspektorGadget 11.85
37 TestAddons/parallel/MetricsServer 6.9
40 TestAddons/parallel/CSI 73.25
41 TestAddons/parallel/Headlamp 11.43
42 TestAddons/parallel/CloudSpanner 5.62
43 TestAddons/parallel/LocalPath 51.38
44 TestAddons/parallel/NvidiaDevicePlugin 5.57
45 TestAddons/parallel/Yakd 6
48 TestAddons/serial/GCPAuth/Namespaces 0.18
49 TestAddons/StoppedEnableDisable 12.32
50 TestCertOptions 34.35
51 TestCertExpiration 238.86
53 TestForceSystemdFlag 38.46
54 TestForceSystemdEnv 48.69
60 TestErrorSpam/setup 33.92
61 TestErrorSpam/start 0.87
62 TestErrorSpam/status 1.17
63 TestErrorSpam/pause 1.9
64 TestErrorSpam/unpause 2.01
65 TestErrorSpam/stop 1.49
68 TestFunctional/serial/CopySyncFile 0
69 TestFunctional/serial/StartWithProxy 78.09
70 TestFunctional/serial/AuditLog 0
71 TestFunctional/serial/SoftStart 30.64
72 TestFunctional/serial/KubeContext 0.06
73 TestFunctional/serial/KubectlGetPods 0.09
76 TestFunctional/serial/CacheCmd/cache/add_remote 3.87
77 TestFunctional/serial/CacheCmd/cache/add_local 1.1
78 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.07
79 TestFunctional/serial/CacheCmd/cache/list 0.07
80 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.34
81 TestFunctional/serial/CacheCmd/cache/cache_reload 2.19
82 TestFunctional/serial/CacheCmd/cache/delete 0.15
83 TestFunctional/serial/MinikubeKubectlCmd 0.15
84 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.16
85 TestFunctional/serial/ExtraConfig 41.59
86 TestFunctional/serial/ComponentHealth 0.1
87 TestFunctional/serial/LogsCmd 1.81
88 TestFunctional/serial/LogsFileCmd 1.83
89 TestFunctional/serial/InvalidService 4.44
91 TestFunctional/parallel/ConfigCmd 0.62
92 TestFunctional/parallel/DashboardCmd 10.59
93 TestFunctional/parallel/DryRun 0.78
94 TestFunctional/parallel/InternationalLanguage 0.27
95 TestFunctional/parallel/StatusCmd 1.44
99 TestFunctional/parallel/ServiceCmdConnect 9.68
100 TestFunctional/parallel/AddonsCmd 0.19
101 TestFunctional/parallel/PersistentVolumeClaim 23.68
103 TestFunctional/parallel/SSHCmd 0.83
104 TestFunctional/parallel/CpCmd 2.31
106 TestFunctional/parallel/FileSync 0.43
107 TestFunctional/parallel/CertSync 2.47
111 TestFunctional/parallel/NodeLabels 0.11
113 TestFunctional/parallel/NonActiveRuntimeDisabled 0.9
115 TestFunctional/parallel/License 0.28
117 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.66
118 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
120 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 10.35
121 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.1
122 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
126 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
127 TestFunctional/parallel/ServiceCmd/DeployApp 7.21
128 TestFunctional/parallel/ProfileCmd/profile_not_create 0.45
129 TestFunctional/parallel/ProfileCmd/profile_list 0.43
130 TestFunctional/parallel/ProfileCmd/profile_json_output 0.43
131 TestFunctional/parallel/MountCmd/any-port 7.33
132 TestFunctional/parallel/ServiceCmd/List 0.56
133 TestFunctional/parallel/ServiceCmd/JSONOutput 0.7
134 TestFunctional/parallel/ServiceCmd/HTTPS 0.44
135 TestFunctional/parallel/ServiceCmd/Format 0.44
136 TestFunctional/parallel/ServiceCmd/URL 0.45
137 TestFunctional/parallel/MountCmd/specific-port 2.33
138 TestFunctional/parallel/MountCmd/VerifyCleanup 2.24
139 TestFunctional/parallel/Version/short 0.1
140 TestFunctional/parallel/Version/components 1.24
141 TestFunctional/parallel/ImageCommands/ImageListShort 0.29
142 TestFunctional/parallel/ImageCommands/ImageListTable 0.32
143 TestFunctional/parallel/ImageCommands/ImageListJson 0.36
144 TestFunctional/parallel/ImageCommands/ImageListYaml 0.37
145 TestFunctional/parallel/ImageCommands/ImageBuild 3.3
146 TestFunctional/parallel/ImageCommands/Setup 2.67
147 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 5.54
148 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 3.56
149 TestFunctional/parallel/UpdateContextCmd/no_changes 0.21
150 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.2
151 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.2
152 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 5.93
153 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.96
154 TestFunctional/parallel/ImageCommands/ImageRemove 0.59
155 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.32
156 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.97
157 TestFunctional/delete_addon-resizer_images 0.08
158 TestFunctional/delete_my-image_image 0.02
159 TestFunctional/delete_minikube_cached_images 0.02
163 TestIngressAddonLegacy/StartLegacyK8sCluster 97.06
165 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 11.43
166 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.76
170 TestJSONOutput/start/Command 90.72
171 TestJSONOutput/start/Audit 0
173 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
174 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
176 TestJSONOutput/pause/Command 0.8
177 TestJSONOutput/pause/Audit 0
179 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
180 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
182 TestJSONOutput/unpause/Command 0.74
183 TestJSONOutput/unpause/Audit 0
185 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
186 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
188 TestJSONOutput/stop/Command 5.87
189 TestJSONOutput/stop/Audit 0
191 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
192 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
193 TestErrorJSONOutput 0.25
195 TestKicCustomNetwork/create_custom_network 48.6
196 TestKicCustomNetwork/use_default_bridge_network 35.34
197 TestKicExistingNetwork 38.42
198 TestKicCustomSubnet 33.92
199 TestKicStaticIP 33.25
200 TestMainNoArgs 0.07
201 TestMinikubeProfile 71.3
204 TestMountStart/serial/StartWithMountFirst 9.74
205 TestMountStart/serial/VerifyMountFirst 0.3
206 TestMountStart/serial/StartWithMountSecond 7.18
207 TestMountStart/serial/VerifyMountSecond 0.31
208 TestMountStart/serial/DeleteFirst 1.68
209 TestMountStart/serial/VerifyMountPostDelete 0.3
210 TestMountStart/serial/Stop 1.23
211 TestMountStart/serial/RestartStopped 7.8
212 TestMountStart/serial/VerifyMountPostStop 0.3
215 TestMultiNode/serial/FreshStart2Nodes 99.65
216 TestMultiNode/serial/DeployApp2Nodes 5.35
218 TestMultiNode/serial/AddNode 49.56
219 TestMultiNode/serial/MultiNodeLabels 0.09
220 TestMultiNode/serial/ProfileList 0.36
221 TestMultiNode/serial/CopyFile 11.42
222 TestMultiNode/serial/StopNode 2.42
223 TestMultiNode/serial/StartAfterStop 13.58
224 TestMultiNode/serial/RestartKeepsNodes 122.77
225 TestMultiNode/serial/DeleteNode 5.19
226 TestMultiNode/serial/StopMultiNode 23.96
227 TestMultiNode/serial/RestartMultiNode 86.01
228 TestMultiNode/serial/ValidateNameConflict 35.59
233 TestPreload 171.42
238 TestInsufficientStorage 10.75
241 TestKubernetesUpgrade 128.66
244 TestPause/serial/Start 83.67
245 TestPause/serial/SecondStartNoReconfiguration 34.57
246 TestPause/serial/Pause 0.8
247 TestPause/serial/VerifyStatus 0.35
248 TestPause/serial/Unpause 0.79
249 TestPause/serial/PauseAgain 1.02
250 TestPause/serial/DeletePaused 2.79
251 TestPause/serial/VerifyDeletedResources 0.18
252 TestStoppedBinaryUpgrade/Setup 1.07
254 TestStoppedBinaryUpgrade/MinikubeLogs 0.69
263 TestNoKubernetes/serial/StartNoK8sWithVersion 0.11
264 TestNoKubernetes/serial/StartWithK8s 40.86
265 TestNoKubernetes/serial/StartWithStopK8s 20.17
273 TestNetworkPlugins/group/false 5.49
274 TestNoKubernetes/serial/Start 7.12
278 TestNoKubernetes/serial/VerifyK8sNotRunning 0.41
279 TestNoKubernetes/serial/ProfileList 0.87
280 TestNoKubernetes/serial/Stop 1.32
281 TestNoKubernetes/serial/StartNoArgs 8.19
282 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.38
284 TestStartStop/group/old-k8s-version/serial/FirstStart 139.82
285 TestStartStop/group/old-k8s-version/serial/DeployApp 9.63
286 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.66
287 TestStartStop/group/old-k8s-version/serial/Stop 12.68
289 TestStartStop/group/no-preload/serial/FirstStart 73.03
290 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.32
291 TestStartStop/group/old-k8s-version/serial/SecondStart 441.51
292 TestStartStop/group/no-preload/serial/DeployApp 9.38
293 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.13
294 TestStartStop/group/no-preload/serial/Stop 12.02
295 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.22
296 TestStartStop/group/no-preload/serial/SecondStart 624.24
297 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6
298 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.11
299 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.28
300 TestStartStop/group/old-k8s-version/serial/Pause 3.59
302 TestStartStop/group/embed-certs/serial/FirstStart 80.32
303 TestStartStop/group/embed-certs/serial/DeployApp 8.35
304 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.22
305 TestStartStop/group/embed-certs/serial/Stop 12.06
306 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.21
307 TestStartStop/group/embed-certs/serial/SecondStart 351.19
308 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
309 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.1
310 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.27
311 TestStartStop/group/no-preload/serial/Pause 3.44
313 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 79.67
314 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.36
315 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.22
316 TestStartStop/group/default-k8s-diff-port/serial/Stop 12
317 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.22
318 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 353.08
319 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 16.01
320 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.11
321 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.27
322 TestStartStop/group/embed-certs/serial/Pause 3.43
324 TestStartStop/group/newest-cni/serial/FirstStart 46.38
325 TestStartStop/group/newest-cni/serial/DeployApp 0
326 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.09
327 TestStartStop/group/newest-cni/serial/Stop 1.27
328 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.23
329 TestStartStop/group/newest-cni/serial/SecondStart 30.78
330 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
331 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
332 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.29
333 TestStartStop/group/newest-cni/serial/Pause 3.18
334 TestNetworkPlugins/group/auto/Start 80.56
335 TestNetworkPlugins/group/auto/KubeletFlags 0.34
336 TestNetworkPlugins/group/auto/NetCatPod 9.27
337 TestNetworkPlugins/group/auto/DNS 0.2
338 TestNetworkPlugins/group/auto/Localhost 0.17
339 TestNetworkPlugins/group/auto/HairPin 0.17
340 TestNetworkPlugins/group/kindnet/Start 57.73
341 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 10.01
342 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.1
343 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.33
344 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
345 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.47
346 TestNetworkPlugins/group/kindnet/KubeletFlags 0.42
347 TestNetworkPlugins/group/kindnet/NetCatPod 12.38
348 TestNetworkPlugins/group/calico/Start 84.01
349 TestNetworkPlugins/group/kindnet/DNS 0.28
350 TestNetworkPlugins/group/kindnet/Localhost 0.21
351 TestNetworkPlugins/group/kindnet/HairPin 0.18
352 TestNetworkPlugins/group/custom-flannel/Start 76.32
353 TestNetworkPlugins/group/calico/ControllerPod 6.01
354 TestNetworkPlugins/group/calico/KubeletFlags 0.43
355 TestNetworkPlugins/group/calico/NetCatPod 13.32
356 TestNetworkPlugins/group/calico/DNS 0.21
357 TestNetworkPlugins/group/calico/Localhost 0.16
358 TestNetworkPlugins/group/calico/HairPin 0.16
359 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.51
360 TestNetworkPlugins/group/custom-flannel/NetCatPod 12.37
361 TestNetworkPlugins/group/custom-flannel/DNS 0.25
362 TestNetworkPlugins/group/custom-flannel/Localhost 0.21
363 TestNetworkPlugins/group/custom-flannel/HairPin 0.22
364 TestNetworkPlugins/group/enable-default-cni/Start 93.96
365 TestNetworkPlugins/group/flannel/Start 67.72
366 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.34
367 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.29
368 TestNetworkPlugins/group/flannel/ControllerPod 6.01
369 TestNetworkPlugins/group/flannel/KubeletFlags 0.34
370 TestNetworkPlugins/group/flannel/NetCatPod 12.26
371 TestNetworkPlugins/group/enable-default-cni/DNS 0.26
372 TestNetworkPlugins/group/enable-default-cni/Localhost 0.2
373 TestNetworkPlugins/group/enable-default-cni/HairPin 0.22
374 TestNetworkPlugins/group/flannel/DNS 0.31
375 TestNetworkPlugins/group/flannel/Localhost 0.23
376 TestNetworkPlugins/group/flannel/HairPin 0.23
377 TestNetworkPlugins/group/bridge/Start 88.49
378 TestNetworkPlugins/group/bridge/KubeletFlags 0.32
379 TestNetworkPlugins/group/bridge/NetCatPod 9.26
380 TestNetworkPlugins/group/bridge/DNS 0.18
381 TestNetworkPlugins/group/bridge/Localhost 0.16
382 TestNetworkPlugins/group/bridge/HairPin 0.16
x
+
TestDownloadOnly/v1.16.0/json-events (19.4s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-345068 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-345068 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (19.4008006s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (19.40s)
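
For readers unfamiliar with how the (dbg) Run lines above come about: the harness drives the freshly built minikube binary as a subprocess and times each invocation. A minimal sketch of that pattern in Go, using the exact flags from the run above (illustrative only, not the suite's own helper code):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		start := time.Now()
		cmd := exec.Command("out/minikube-linux-arm64",
			"start", "-o=json", "--download-only", "-p", "download-only-345068",
			"--force", "--alsologtostderr",
			"--kubernetes-version=v1.16.0",
			"--container-runtime=crio", "--driver=docker")
		out, err := cmd.CombinedOutput() // capture stdout+stderr together, as the dbg output does
		fmt.Printf("done in %s (err=%v)\n%s", time.Since(start), err, out)
	}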

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)
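
preload-exists only asserts that the tarball fetched by the previous step is now present in the local cache. A rough standalone equivalent of that check, using the cache path that appears in the LogsDuration output below (illustrative, not the test's own code):

	package main

	import (
		"fmt"
		"os"
	)

	func main() {
		// Cache path taken from the download target logged later in this report.
		p := "/home/jenkins/minikube-integration/17830-1678586/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4"
		info, err := os.Stat(p)
		if err != nil {
			fmt.Println("preload missing:", err)
			return
		}
		fmt.Printf("preload present: %s (%d bytes)\n", info.Name(), info.Size())
	}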

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/LogsDuration (0.1s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-345068
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-345068: exit status 85 (95.167641ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-345068 | jenkins | v1.32.0 | 09 Jan 24 00:00 UTC |          |
	|         | -p download-only-345068        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/09 00:00:34
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.21.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0109 00:00:34.569988 1683972 out.go:296] Setting OutFile to fd 1 ...
	I0109 00:00:34.570200 1683972 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0109 00:00:34.570227 1683972 out.go:309] Setting ErrFile to fd 2...
	I0109 00:00:34.570249 1683972 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0109 00:00:34.570567 1683972 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17830-1678586/.minikube/bin
	W0109 00:00:34.570748 1683972 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17830-1678586/.minikube/config/config.json: open /home/jenkins/minikube-integration/17830-1678586/.minikube/config/config.json: no such file or directory
	I0109 00:00:34.571222 1683972 out.go:303] Setting JSON to true
	I0109 00:00:34.572099 1683972 start.go:128] hostinfo: {"hostname":"ip-172-31-30-239","uptime":24177,"bootTime":1704734258,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0109 00:00:34.572198 1683972 start.go:138] virtualization:  
	I0109 00:00:34.581104 1683972 out.go:97] [download-only-345068] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0109 00:00:34.589786 1683972 out.go:169] MINIKUBE_LOCATION=17830
	W0109 00:00:34.581387 1683972 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/17830-1678586/.minikube/cache/preloaded-tarball: no such file or directory
	I0109 00:00:34.581469 1683972 notify.go:220] Checking for updates...
	I0109 00:00:34.596626 1683972 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0109 00:00:34.599471 1683972 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17830-1678586/kubeconfig
	I0109 00:00:34.601661 1683972 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17830-1678586/.minikube
	I0109 00:00:34.603502 1683972 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0109 00:00:34.607403 1683972 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0109 00:00:34.607664 1683972 driver.go:392] Setting default libvirt URI to qemu:///system
	I0109 00:00:34.631236 1683972 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I0109 00:00:34.631345 1683972 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0109 00:00:34.729146 1683972 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:42 SystemTime:2024-01-09 00:00:34.719553699 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0109 00:00:34.729243 1683972 docker.go:295] overlay module found
	I0109 00:00:34.731489 1683972 out.go:97] Using the docker driver based on user configuration
	I0109 00:00:34.731522 1683972 start.go:298] selected driver: docker
	I0109 00:00:34.731582 1683972 start.go:902] validating driver "docker" against <nil>
	I0109 00:00:34.731716 1683972 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0109 00:00:34.807421 1683972 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:42 SystemTime:2024-01-09 00:00:34.798142554 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0109 00:00:34.807580 1683972 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0109 00:00:34.807857 1683972 start_flags.go:394] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0109 00:00:34.808039 1683972 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I0109 00:00:34.810575 1683972 out.go:169] Using Docker driver with root privileges
	I0109 00:00:34.812820 1683972 cni.go:84] Creating CNI manager for ""
	I0109 00:00:34.812855 1683972 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0109 00:00:34.812875 1683972 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I0109 00:00:34.812889 1683972 start_flags.go:323] config:
	{Name:download-only-345068 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-345068 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0109 00:00:34.814997 1683972 out.go:97] Starting control plane node download-only-345068 in cluster download-only-345068
	I0109 00:00:34.815019 1683972 cache.go:121] Beginning downloading kic base image for docker with crio
	I0109 00:00:34.816972 1683972 out.go:97] Pulling base image v0.0.42-1704751654-17830 ...
	I0109 00:00:34.816998 1683972 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0109 00:00:34.817152 1683972 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 in local docker daemon
	I0109 00:00:34.834672 1683972 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 to local cache
	I0109 00:00:34.834861 1683972 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 in local cache directory
	I0109 00:00:34.834968 1683972 image.go:118] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 to local cache
	I0109 00:00:34.878515 1683972 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4
	I0109 00:00:34.878554 1683972 cache.go:56] Caching tarball of preloaded images
	I0109 00:00:34.878718 1683972 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0109 00:00:34.881323 1683972 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0109 00:00:34.881357 1683972 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4 ...
	I0109 00:00:34.994529 1683972 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:743cd3b7071469270e4dbdc0d89badaa -> /home/jenkins/minikube-integration/17830-1678586/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4
	I0109 00:00:41.308612 1683972 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 as a tarball
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-345068"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.10s)
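
The PASS here hinges on treating exit status 85 as the expected outcome: the profile was created with --download-only, so no control-plane node exists (see the trailing message in the stdout above) and "minikube logs" has nothing to read. A sketch of asserting on a specific exit code in Go; the expectation of 85 comes from this run, not from any documented contract:

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		err := exec.Command("out/minikube-linux-arm64", "logs", "-p", "download-only-345068").Run()
		var ee *exec.ExitError
		if errors.As(err, &ee) && ee.ExitCode() == 85 {
			fmt.Println("got the expected exit status 85")
			return
		}
		fmt.Println("unexpected result:", err)
	}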

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/json-events (15.38s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-345068 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-345068 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=crio --driver=docker  --container-runtime=crio: (15.378768566s)
--- PASS: TestDownloadOnly/v1.28.4/json-events (15.38s)
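
The json-events variants pass -o=json ("Setting JSON to true" in the logs), which switches the progress output to machine-readable events. Assuming those events arrive one JSON object per line, which is what the test name suggests but this report does not show, a schema-agnostic consumer looks like:

	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"os"
	)

	func main() {
		// Pipe the -o=json output of the start command into stdin.
		sc := bufio.NewScanner(os.Stdin)
		sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // allow long event lines
		for sc.Scan() {
			var ev map[string]any
			if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
				continue // tolerate any non-JSON lines
			}
			fmt.Println("event:", ev)
		}
	}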

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/preload-exists
--- PASS: TestDownloadOnly/v1.28.4/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/LogsDuration (0.09s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-345068
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-345068: exit status 85 (87.63869ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-345068 | jenkins | v1.32.0 | 09 Jan 24 00:00 UTC |          |
	|         | -p download-only-345068        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-345068 | jenkins | v1.32.0 | 09 Jan 24 00:00 UTC |          |
	|         | -p download-only-345068        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.4   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/09 00:00:54
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.21.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0109 00:00:54.068168 1684046 out.go:296] Setting OutFile to fd 1 ...
	I0109 00:00:54.068311 1684046 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0109 00:00:54.068321 1684046 out.go:309] Setting ErrFile to fd 2...
	I0109 00:00:54.068326 1684046 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0109 00:00:54.068603 1684046 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17830-1678586/.minikube/bin
	W0109 00:00:54.068723 1684046 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17830-1678586/.minikube/config/config.json: open /home/jenkins/minikube-integration/17830-1678586/.minikube/config/config.json: no such file or directory
	I0109 00:00:54.068971 1684046 out.go:303] Setting JSON to true
	I0109 00:00:54.069801 1684046 start.go:128] hostinfo: {"hostname":"ip-172-31-30-239","uptime":24196,"bootTime":1704734258,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0109 00:00:54.069879 1684046 start.go:138] virtualization:  
	I0109 00:00:54.072349 1684046 out.go:97] [download-only-345068] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0109 00:00:54.074538 1684046 out.go:169] MINIKUBE_LOCATION=17830
	I0109 00:00:54.072634 1684046 notify.go:220] Checking for updates...
	I0109 00:00:54.076670 1684046 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0109 00:00:54.078842 1684046 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17830-1678586/kubeconfig
	I0109 00:00:54.080861 1684046 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17830-1678586/.minikube
	I0109 00:00:54.082774 1684046 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0109 00:00:54.087019 1684046 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0109 00:00:54.087594 1684046 config.go:182] Loaded profile config "download-only-345068": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	W0109 00:00:54.087680 1684046 start.go:810] api.Load failed for download-only-345068: filestore "download-only-345068": Docker machine "download-only-345068" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0109 00:00:54.087783 1684046 driver.go:392] Setting default libvirt URI to qemu:///system
	W0109 00:00:54.087812 1684046 start.go:810] api.Load failed for download-only-345068: filestore "download-only-345068": Docker machine "download-only-345068" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0109 00:00:54.112762 1684046 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I0109 00:00:54.112893 1684046 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0109 00:00:54.197139 1684046 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:38 SystemTime:2024-01-09 00:00:54.185940177 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0109 00:00:54.197244 1684046 docker.go:295] overlay module found
	I0109 00:00:54.199421 1684046 out.go:97] Using the docker driver based on existing profile
	I0109 00:00:54.199446 1684046 start.go:298] selected driver: docker
	I0109 00:00:54.199453 1684046 start.go:902] validating driver "docker" against &{Name:download-only-345068 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-345068 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0109 00:00:54.199659 1684046 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0109 00:00:54.265576 1684046 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:38 SystemTime:2024-01-09 00:00:54.256479648 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0109 00:00:54.266027 1684046 cni.go:84] Creating CNI manager for ""
	I0109 00:00:54.266048 1684046 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0109 00:00:54.266062 1684046 start_flags.go:323] config:
	{Name:download-only-345068 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:download-only-345068 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0109 00:00:54.277656 1684046 out.go:97] Starting control plane node download-only-345068 in cluster download-only-345068
	I0109 00:00:54.277690 1684046 cache.go:121] Beginning downloading kic base image for docker with crio
	I0109 00:00:54.283990 1684046 out.go:97] Pulling base image v0.0.42-1704751654-17830 ...
	I0109 00:00:54.284032 1684046 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0109 00:00:54.284117 1684046 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 in local docker daemon
	I0109 00:00:54.301214 1684046 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 to local cache
	I0109 00:00:54.301344 1684046 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 in local cache directory
	I0109 00:00:54.301370 1684046 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 in local cache directory, skipping pull
	I0109 00:00:54.301375 1684046 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 exists in cache, skipping pull
	I0109 00:00:54.301383 1684046 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 as a tarball
	I0109 00:00:54.348397 1684046 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4
	I0109 00:00:54.348420 1684046 cache.go:56] Caching tarball of preloaded images
	I0109 00:00:54.348583 1684046 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0109 00:00:54.350998 1684046 out.go:97] Downloading Kubernetes v1.28.4 preload ...
	I0109 00:00:54.351023 1684046 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4 ...
	I0109 00:00:54.463298 1684046 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4?checksum=md5:23e2271fd1a7b32f52ce36ae8363c081 -> /home/jenkins/minikube-integration/17830-1678586/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4
	I0109 00:01:07.687281 1684046 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4 ...
	I0109 00:01:07.687392 1684046 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17830-1678586/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4 ...
	I0109 00:01:08.608048 1684046 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0109 00:01:08.608185 1684046 profile.go:148] Saving config to /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/download-only-345068/config.json ...
	I0109 00:01:08.608400 1684046 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0109 00:01:08.608593 1684046 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.4/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/17830-1678586/.minikube/cache/linux/arm64/v1.28.4/kubectl
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-345068"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.4/LogsDuration (0.09s)
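
The download lines in this block also show how the preload is validated: the URL carries a ?checksum=md5:<hex> hint, and after the fetch the file is hashed and compared ("saving checksum" / "verifying checksum" above). The same check in standalone form, reusing the digest from the v1.28.4 URL in this log:

	package main

	import (
		"crypto/md5"
		"encoding/hex"
		"fmt"
		"io"
		"os"
	)

	func main() {
		// Digest taken from the ?checksum=md5:... parameter logged above.
		const want = "23e2271fd1a7b32f52ce36ae8363c081"
		f, err := os.Open("preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4")
		if err != nil {
			fmt.Println(err)
			return
		}
		defer f.Close()
		h := md5.New()
		if _, err := io.Copy(h, f); err != nil {
			fmt.Println(err)
			return
		}
		if got := hex.EncodeToString(h.Sum(nil)); got != want {
			fmt.Printf("checksum mismatch: got %s want %s\n", got, want)
			return
		}
		fmt.Println("preload checksum OK")
	}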

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/json-events (20.52s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-345068 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-345068 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=crio --driver=docker  --container-runtime=crio: (20.524079923s)
--- PASS: TestDownloadOnly/v1.29.0-rc.2/json-events (20.52s)
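
Each Kubernetes version exercised here resolves to its own preload tarball, and the URLs logged for v1.16.0 and v1.28.4 follow a single pattern that varies only in version, runtime, and architecture. A small helper that reproduces that pattern, inferred from the URLs in this report rather than taken from minikube's source:

	package main

	import "fmt"

	// preloadURL rebuilds the tarball URL pattern observed in this report.
	func preloadURL(k8sVersion, runtime, arch string) string {
		return fmt.Sprintf(
			"https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/%[1]s/preloaded-images-k8s-v18-%[1]s-%[2]s-overlay-%[3]s.tar.lz4",
			k8sVersion, runtime, arch)
	}

	func main() {
		fmt.Println(preloadURL("v1.29.0-rc.2", "cri-o", "arm64"))
	}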

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/preload-exists
--- PASS: TestDownloadOnly/v1.29.0-rc.2/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.22s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-345068
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-345068: exit status 85 (221.434706ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |               Args                |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only           | download-only-345068 | jenkins | v1.32.0 | 09 Jan 24 00:00 UTC |          |
	|         | -p download-only-345068           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0      |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	|         | --driver=docker                   |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	| start   | -o=json --download-only           | download-only-345068 | jenkins | v1.32.0 | 09 Jan 24 00:00 UTC |          |
	|         | -p download-only-345068           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.4      |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	|         | --driver=docker                   |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	| start   | -o=json --download-only           | download-only-345068 | jenkins | v1.32.0 | 09 Jan 24 00:01 UTC |          |
	|         | -p download-only-345068           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.29.0-rc.2 |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	|         | --driver=docker                   |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/09 00:01:09
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.21.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0109 00:01:09.532796 1684118 out.go:296] Setting OutFile to fd 1 ...
	I0109 00:01:09.533014 1684118 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0109 00:01:09.533027 1684118 out.go:309] Setting ErrFile to fd 2...
	I0109 00:01:09.533034 1684118 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0109 00:01:09.533363 1684118 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17830-1678586/.minikube/bin
	W0109 00:01:09.533591 1684118 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17830-1678586/.minikube/config/config.json: open /home/jenkins/minikube-integration/17830-1678586/.minikube/config/config.json: no such file or directory
	I0109 00:01:09.533876 1684118 out.go:303] Setting JSON to true
	I0109 00:01:09.534800 1684118 start.go:128] hostinfo: {"hostname":"ip-172-31-30-239","uptime":24212,"bootTime":1704734258,"procs":160,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0109 00:01:09.534872 1684118 start.go:138] virtualization:  
	I0109 00:01:09.537595 1684118 out.go:97] [download-only-345068] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0109 00:01:09.539851 1684118 out.go:169] MINIKUBE_LOCATION=17830
	I0109 00:01:09.537967 1684118 notify.go:220] Checking for updates...
	I0109 00:01:09.541889 1684118 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0109 00:01:09.544101 1684118 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17830-1678586/kubeconfig
	I0109 00:01:09.545977 1684118 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17830-1678586/.minikube
	I0109 00:01:09.548119 1684118 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0109 00:01:09.551916 1684118 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0109 00:01:09.552445 1684118 config.go:182] Loaded profile config "download-only-345068": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	W0109 00:01:09.552530 1684118 start.go:810] api.Load failed for download-only-345068: filestore "download-only-345068": Docker machine "download-only-345068" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0109 00:01:09.552635 1684118 driver.go:392] Setting default libvirt URI to qemu:///system
	W0109 00:01:09.552677 1684118 start.go:810] api.Load failed for download-only-345068: filestore "download-only-345068": Docker machine "download-only-345068" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0109 00:01:09.576204 1684118 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I0109 00:01:09.576321 1684118 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0109 00:01:09.661012 1684118 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:38 SystemTime:2024-01-09 00:01:09.651087958 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0109 00:01:09.661114 1684118 docker.go:295] overlay module found
	I0109 00:01:09.663288 1684118 out.go:97] Using the docker driver based on existing profile
	I0109 00:01:09.663315 1684118 start.go:298] selected driver: docker
	I0109 00:01:09.663320 1684118 start.go:902] validating driver "docker" against &{Name:download-only-345068 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:download-only-345068 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0109 00:01:09.663504 1684118 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0109 00:01:09.739496 1684118 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:38 SystemTime:2024-01-09 00:01:09.730032213 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0109 00:01:09.739961 1684118 cni.go:84] Creating CNI manager for ""
	I0109 00:01:09.739980 1684118 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0109 00:01:09.739991 1684118 start_flags.go:323] config:
	{Name:download-only-345068 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:download-only-345068 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0109 00:01:09.742852 1684118 out.go:97] Starting control plane node download-only-345068 in cluster download-only-345068
	I0109 00:01:09.742874 1684118 cache.go:121] Beginning downloading kic base image for docker with crio
	I0109 00:01:09.745153 1684118 out.go:97] Pulling base image v0.0.42-1704751654-17830 ...
	I0109 00:01:09.745177 1684118 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0109 00:01:09.745278 1684118 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 in local docker daemon
	I0109 00:01:09.762120 1684118 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 to local cache
	I0109 00:01:09.762269 1684118 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 in local cache directory
	I0109 00:01:09.762289 1684118 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 in local cache directory, skipping pull
	I0109 00:01:09.762294 1684118 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 exists in cache, skipping pull
	I0109 00:01:09.762302 1684118 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 as a tarball
	I0109 00:01:09.807202 1684118 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-arm64.tar.lz4
	I0109 00:01:09.807233 1684118 cache.go:56] Caching tarball of preloaded images
	I0109 00:01:09.807402 1684118 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0109 00:01:09.810085 1684118 out.go:97] Downloading Kubernetes v1.29.0-rc.2 preload ...
	I0109 00:01:09.810106 1684118 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-arm64.tar.lz4 ...
	I0109 00:01:09.923700 1684118 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-arm64.tar.lz4?checksum=md5:307124b87428587d9288b24ec2db2592 -> /home/jenkins/minikube-integration/17830-1678586/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-arm64.tar.lz4
	I0109 00:01:28.333514 1684118 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-arm64.tar.lz4 ...
	I0109 00:01:28.333623 1684118 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17830-1678586/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-arm64.tar.lz4 ...
	I0109 00:01:29.212315 1684118 cache.go:59] Finished verifying existence of preloaded tar for  v1.29.0-rc.2 on crio
	I0109 00:01:29.212471 1684118 profile.go:148] Saving config to /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/download-only-345068/config.json ...
	I0109 00:01:29.212708 1684118 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0109 00:01:29.212948 1684118 download.go:107] Downloading: https://dl.k8s.io/release/v1.29.0-rc.2/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.29.0-rc.2/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/17830-1678586/.minikube/cache/linux/arm64/v1.29.0-rc.2/kubectl
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-345068"

-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.22s)
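
The preload step in the log above downloads the tarball with a "?checksum=md5:..." query parameter and then saves and verifies that checksum on disk before trusting the file. A minimal Go sketch of that verify-after-download step, using only the standard library (verifyPreloadChecksum is an illustrative name, not minikube's actual helper):

	package main

	import (
		"crypto/md5"
		"encoding/hex"
		"fmt"
		"io"
		"os"
	)

	// verifyPreloadChecksum streams the downloaded tarball through MD5 and
	// compares the digest with the expected value from the ?checksum= query.
	func verifyPreloadChecksum(path, wantMD5 string) error {
		f, err := os.Open(path)
		if err != nil {
			return err
		}
		defer f.Close()
		h := md5.New()
		if _, err := io.Copy(h, f); err != nil {
			return err
		}
		if got := hex.EncodeToString(h.Sum(nil)); got != wantMD5 {
			return fmt.Errorf("checksum mismatch: got %s, want %s", got, wantMD5)
		}
		return nil
	}

	func main() {
		// Values taken from the download URL above.
		err := verifyPreloadChecksum(
			"preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-arm64.tar.lz4",
			"307124b87428587d9288b24ec2db2592")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}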

TestDownloadOnly/DeleteAll (0.44s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:190: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.44s)

TestDownloadOnly/DeleteAlwaysSucceeds (0.25s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:202: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-345068
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.25s)

TestBinaryMirror (0.65s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:307: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-421596 --alsologtostderr --binary-mirror http://127.0.0.1:35921 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-421596" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-421596
--- PASS: TestBinaryMirror (0.65s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.09s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-983119
addons_test.go:928: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-983119: exit status 85 (94.138832ms)

-- stdout --
	* Profile "addons-983119" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-983119"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.09s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.1s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-983119
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-983119: exit status 85 (96.150682ms)

-- stdout --
	* Profile "addons-983119" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-983119"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.10s)

TestAddons/Setup (168.31s)

=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-linux-arm64 start -p addons-983119 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns
addons_test.go:109: (dbg) Done: out/minikube-linux-arm64 start -p addons-983119 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns: (2m48.30738088s)
--- PASS: TestAddons/Setup (168.31s)

TestAddons/parallel/Registry (16.4s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:330: registry stabilized in 50.560089ms
addons_test.go:332: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-lwbmr" [10d6756f-1d99-487a-9be4-279128cdb09c] Running
addons_test.go:332: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.005592176s
addons_test.go:335: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-dbt9w" [a1758f6f-e461-403a-82f6-be54e122eb97] Running
addons_test.go:335: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004412845s
addons_test.go:340: (dbg) Run:  kubectl --context addons-983119 delete po -l run=registry-test --now
addons_test.go:345: (dbg) Run:  kubectl --context addons-983119 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:345: (dbg) Done: kubectl --context addons-983119 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.18613843s)
addons_test.go:359: (dbg) Run:  out/minikube-linux-arm64 -p addons-983119 ip
2024/01/09 00:04:36 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:388: (dbg) Run:  out/minikube-linux-arm64 -p addons-983119 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (16.40s)
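
The "wget --spider" call above is a headers-only reachability probe against the registry Service's cluster DNS name. Roughly the same probe as a standalone Go program (a sketch meant to run inside the cluster, where registry.kube-system.svc.cluster.local resolves; it is not part of the test suite):

	package main

	import (
		"fmt"
		"net/http"
		"os"
		"time"
	)

	func main() {
		client := &http.Client{Timeout: 10 * time.Second}
		// HEAD is the closest analogue of wget --spider: fetch headers, skip the body.
		resp, err := client.Head("http://registry.kube-system.svc.cluster.local")
		if err != nil {
			fmt.Fprintln(os.Stderr, "registry unreachable:", err)
			os.Exit(1)
		}
		defer resp.Body.Close()
		fmt.Println("registry responded:", resp.Status)
	}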

TestAddons/parallel/InspektorGadget (11.85s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-5q25b" [24472b6e-06ea-4cb8-bdc4-4e5f965d5784] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.003835011s
addons_test.go:841: (dbg) Run:  out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-983119
addons_test.go:841: (dbg) Done: out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-983119: (5.843199934s)
--- PASS: TestAddons/parallel/InspektorGadget (11.85s)

TestAddons/parallel/MetricsServer (6.9s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:407: metrics-server stabilized in 16.134344ms
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7c66d45ddc-wvvsq" [3b0056d9-627e-46f2-a86a-e7f4cc7ca3da] Running
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.004827817s
addons_test.go:415: (dbg) Run:  kubectl --context addons-983119 top pods -n kube-system
addons_test.go:432: (dbg) Run:  out/minikube-linux-arm64 -p addons-983119 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.90s)

TestAddons/parallel/CSI (73.25s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:561: csi-hostpath-driver pods stabilized in 49.886474ms
addons_test.go:564: (dbg) Run:  kubectl --context addons-983119 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-983119 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-983119 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-983119 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-983119 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-983119 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-983119 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-983119 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-983119 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-983119 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-983119 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-983119 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-983119 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-983119 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-983119 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-983119 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-983119 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-983119 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-983119 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-983119 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-983119 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-983119 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-983119 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-983119 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-983119 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:574: (dbg) Run:  kubectl --context addons-983119 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [31e6889c-202c-41ba-93a5-61457ccfed3b] Pending
helpers_test.go:344: "task-pv-pod" [31e6889c-202c-41ba-93a5-61457ccfed3b] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [31e6889c-202c-41ba-93a5-61457ccfed3b] Running
addons_test.go:579: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 11.004283165s
addons_test.go:584: (dbg) Run:  kubectl --context addons-983119 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:589: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-983119 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-983119 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:594: (dbg) Run:  kubectl --context addons-983119 delete pod task-pv-pod
addons_test.go:600: (dbg) Run:  kubectl --context addons-983119 delete pvc hpvc
addons_test.go:606: (dbg) Run:  kubectl --context addons-983119 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-983119 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-983119 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-983119 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-983119 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-983119 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-983119 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-983119 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-983119 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-983119 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-983119 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-983119 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-983119 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-983119 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-983119 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-983119 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-983119 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-983119 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-983119 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-983119 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-983119 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-983119 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:616: (dbg) Run:  kubectl --context addons-983119 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:621: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [29985d0f-d9b5-4844-a32f-6b0adfa41c6a] Pending
helpers_test.go:344: "task-pv-pod-restore" [29985d0f-d9b5-4844-a32f-6b0adfa41c6a] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [29985d0f-d9b5-4844-a32f-6b0adfa41c6a] Running
addons_test.go:621: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.003933995s
addons_test.go:626: (dbg) Run:  kubectl --context addons-983119 delete pod task-pv-pod-restore
addons_test.go:630: (dbg) Run:  kubectl --context addons-983119 delete pvc hpvc-restore
addons_test.go:634: (dbg) Run:  kubectl --context addons-983119 delete volumesnapshot new-snapshot-demo
addons_test.go:638: (dbg) Run:  out/minikube-linux-arm64 -p addons-983119 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:638: (dbg) Done: out/minikube-linux-arm64 -p addons-983119 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.805282242s)
addons_test.go:642: (dbg) Run:  out/minikube-linux-arm64 -p addons-983119 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (73.25s)
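
The long runs of "kubectl get pvc ... -o jsonpath={.status.phase}" above are a poll loop: the helper re-reads the claim's phase until it reports Bound. The test shells out to kubectl; the same wait expressed against client-go would look roughly like this (a sketch assuming a standard kubeconfig, not the suite's actual helper):

	package main

	import (
		"context"
		"fmt"
		"os"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitForPVCBound re-reads status.phase until the claim is Bound,
	// mirroring the repeated jsonpath queries in the log above.
	func waitForPVCBound(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			pvc, err := cs.CoreV1().PersistentVolumeClaims(ns).Get(context.TODO(), name, metav1.GetOptions{})
			if err == nil && pvc.Status.Phase == corev1.ClaimBound {
				return nil
			}
			time.Sleep(2 * time.Second)
		}
		return fmt.Errorf("pvc %s/%s not Bound within %v", ns, name, timeout)
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		if err := waitForPVCBound(cs, "default", "hpvc", 6*time.Minute); err != nil {
			panic(err)
		}
	}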

TestAddons/parallel/Headlamp (11.43s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:824: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-983119 --alsologtostderr -v=1
addons_test.go:824: (dbg) Done: out/minikube-linux-arm64 addons enable headlamp -p addons-983119 --alsologtostderr -v=1: (1.428071345s)
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7ddfbb94ff-b9vrz" [a8d747d8-cdb7-4c43-80dc-737edf69cf0f] Pending
helpers_test.go:344: "headlamp-7ddfbb94ff-b9vrz" [a8d747d8-cdb7-4c43-80dc-737edf69cf0f] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7ddfbb94ff-b9vrz" [a8d747d8-cdb7-4c43-80dc-737edf69cf0f] Running
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 10.00399852s
--- PASS: TestAddons/parallel/Headlamp (11.43s)

TestAddons/parallel/CloudSpanner (5.62s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-64c8c85f65-sdfz2" [f5ef17b2-8c1f-4c35-b421-646a3bbc8a55] Running
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.004514331s
addons_test.go:860: (dbg) Run:  out/minikube-linux-arm64 addons disable cloud-spanner -p addons-983119
--- PASS: TestAddons/parallel/CloudSpanner (5.62s)

TestAddons/parallel/LocalPath (51.38s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:873: (dbg) Run:  kubectl --context addons-983119 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:879: (dbg) Run:  kubectl --context addons-983119 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:883: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-983119 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-983119 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-983119 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-983119 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-983119 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [f3bcc496-ae61-4b1b-b102-79534dd26647] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [f3bcc496-ae61-4b1b-b102-79534dd26647] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [f3bcc496-ae61-4b1b-b102-79534dd26647] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.003996537s
addons_test.go:891: (dbg) Run:  kubectl --context addons-983119 get pvc test-pvc -o=json
addons_test.go:900: (dbg) Run:  out/minikube-linux-arm64 -p addons-983119 ssh "cat /opt/local-path-provisioner/pvc-0fb851d4-2568-488a-8306-8d95aae72b4e_default_test-pvc/file1"
addons_test.go:912: (dbg) Run:  kubectl --context addons-983119 delete pod test-local-path
addons_test.go:916: (dbg) Run:  kubectl --context addons-983119 delete pvc test-pvc
addons_test.go:920: (dbg) Run:  out/minikube-linux-arm64 -p addons-983119 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:920: (dbg) Done: out/minikube-linux-arm64 -p addons-983119 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.214171045s)
--- PASS: TestAddons/parallel/LocalPath (51.38s)

TestAddons/parallel/NvidiaDevicePlugin (5.57s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-2qj49" [5d4c1201-ce21-4462-b0d3-1bf7598039b3] Running
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.004816808s
addons_test.go:955: (dbg) Run:  out/minikube-linux-arm64 addons disable nvidia-device-plugin -p addons-983119
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.57s)

TestAddons/parallel/Yakd (6s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-9947fc6bf-fh4qw" [a986270a-d264-45aa-b4ae-a3af25285329] Running
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003457251s
--- PASS: TestAddons/parallel/Yakd (6.00s)

TestAddons/serial/GCPAuth/Namespaces (0.18s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:650: (dbg) Run:  kubectl --context addons-983119 create ns new-namespace
addons_test.go:664: (dbg) Run:  kubectl --context addons-983119 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.18s)

TestAddons/StoppedEnableDisable (12.32s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-983119
addons_test.go:172: (dbg) Done: out/minikube-linux-arm64 stop -p addons-983119: (11.968993725s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-983119
addons_test.go:180: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-983119
addons_test.go:185: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-983119
--- PASS: TestAddons/StoppedEnableDisable (12.32s)

TestCertOptions (34.35s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-211865 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-211865 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (31.622533442s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-211865 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-211865 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-211865 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-211865" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-211865
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-211865: (2.016764381s)
--- PASS: TestCertOptions (34.35s)

TestCertExpiration (238.86s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-843446 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-843446 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio: (37.557251833s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-843446 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-843446 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (18.358334909s)
helpers_test.go:175: Cleaning up "cert-expiration-843446" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-843446
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-843446: (2.946268043s)
--- PASS: TestCertExpiration (238.86s)
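
For reference, both --cert-expiration values above are ordinary Go durations: 3m forces the certificates to expire almost immediately, while 8760h works out to exactly 365 days, i.e. a one-year validity. A quick check:

	package main

	import (
		"fmt"
		"time"
	)

	func main() {
		short, _ := time.ParseDuration("3m")
		long, _ := time.ParseDuration("8760h")
		fmt.Println(short)                                    // 3m0s
		fmt.Printf("%v = %.0f days\n", long, long.Hours()/24) // 8760h0m0s = 365 days
	}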

TestForceSystemdFlag (38.46s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-502460 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E0109 00:42:23.998667 1683967 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/addons-983119/client.crt: no such file or directory
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-502460 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (35.527667191s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-502460 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-502460" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-502460
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-502460: (2.458617334s)
--- PASS: TestForceSystemdFlag (38.46s)

TestForceSystemdEnv (48.69s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-274058 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-274058 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (43.482457943s)
helpers_test.go:175: Cleaning up "force-systemd-env-274058" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-274058
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-274058: (5.203806361s)
--- PASS: TestForceSystemdEnv (48.69s)

TestErrorSpam/setup (33.92s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-385728 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-385728 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-385728 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-385728 --driver=docker  --container-runtime=crio: (33.922808919s)
--- PASS: TestErrorSpam/setup (33.92s)

TestErrorSpam/start (0.87s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-385728 --log_dir /tmp/nospam-385728 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-385728 --log_dir /tmp/nospam-385728 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-385728 --log_dir /tmp/nospam-385728 start --dry-run
--- PASS: TestErrorSpam/start (0.87s)

TestErrorSpam/status (1.17s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-385728 --log_dir /tmp/nospam-385728 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-385728 --log_dir /tmp/nospam-385728 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-385728 --log_dir /tmp/nospam-385728 status
--- PASS: TestErrorSpam/status (1.17s)

TestErrorSpam/pause (1.9s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-385728 --log_dir /tmp/nospam-385728 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-385728 --log_dir /tmp/nospam-385728 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-385728 --log_dir /tmp/nospam-385728 pause
--- PASS: TestErrorSpam/pause (1.90s)

TestErrorSpam/unpause (2.01s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-385728 --log_dir /tmp/nospam-385728 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-385728 --log_dir /tmp/nospam-385728 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-385728 --log_dir /tmp/nospam-385728 unpause
--- PASS: TestErrorSpam/unpause (2.01s)

TestErrorSpam/stop (1.49s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-385728 --log_dir /tmp/nospam-385728 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-385728 --log_dir /tmp/nospam-385728 stop: (1.263411972s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-385728 --log_dir /tmp/nospam-385728 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-385728 --log_dir /tmp/nospam-385728 stop
--- PASS: TestErrorSpam/stop (1.49s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1854: local sync path: /home/jenkins/minikube-integration/17830-1678586/.minikube/files/etc/test/nested/copy/1683967/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (78.09s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2233: (dbg) Run:  out/minikube-linux-arm64 start -p functional-451422 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
E0109 00:09:20.953766 1683967 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/addons-983119/client.crt: no such file or directory
E0109 00:09:20.959408 1683967 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/addons-983119/client.crt: no such file or directory
E0109 00:09:20.969656 1683967 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/addons-983119/client.crt: no such file or directory
E0109 00:09:20.989897 1683967 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/addons-983119/client.crt: no such file or directory
E0109 00:09:21.030364 1683967 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/addons-983119/client.crt: no such file or directory
E0109 00:09:21.110624 1683967 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/addons-983119/client.crt: no such file or directory
E0109 00:09:21.270957 1683967 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/addons-983119/client.crt: no such file or directory
E0109 00:09:21.591447 1683967 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/addons-983119/client.crt: no such file or directory
E0109 00:09:22.232270 1683967 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/addons-983119/client.crt: no such file or directory
E0109 00:09:23.512486 1683967 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/addons-983119/client.crt: no such file or directory
E0109 00:09:26.072735 1683967 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/addons-983119/client.crt: no such file or directory
E0109 00:09:31.193383 1683967 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/addons-983119/client.crt: no such file or directory
E0109 00:09:41.434361 1683967 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/addons-983119/client.crt: no such file or directory
E0109 00:10:01.915230 1683967 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/addons-983119/client.crt: no such file or directory
functional_test.go:2233: (dbg) Done: out/minikube-linux-arm64 start -p functional-451422 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (1m18.090675404s)
--- PASS: TestFunctional/serial/StartWithProxy (78.09s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (30.64s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-arm64 start -p functional-451422 --alsologtostderr -v=8
E0109 00:10:42.875992 1683967 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/addons-983119/client.crt: no such file or directory
functional_test.go:655: (dbg) Done: out/minikube-linux-arm64 start -p functional-451422 --alsologtostderr -v=8: (30.636451402s)
functional_test.go:659: soft start took 30.642278564s for "functional-451422" cluster.
--- PASS: TestFunctional/serial/SoftStart (30.64s)

TestFunctional/serial/KubeContext (0.06s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

TestFunctional/serial/KubectlGetPods (0.09s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-451422 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.09s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.87s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-451422 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-451422 cache add registry.k8s.io/pause:3.1: (1.273473574s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-451422 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-451422 cache add registry.k8s.io/pause:3.3: (1.312491014s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-451422 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-451422 cache add registry.k8s.io/pause:latest: (1.281242939s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.87s)

TestFunctional/serial/CacheCmd/cache/add_local (1.1s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-451422 /tmp/TestFunctionalserialCacheCmdcacheadd_local817415328/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-arm64 -p functional-451422 cache add minikube-local-cache-test:functional-451422
functional_test.go:1090: (dbg) Run:  out/minikube-linux-arm64 -p functional-451422 cache delete minikube-local-cache-test:functional-451422
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-451422
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.10s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

TestFunctional/serial/CacheCmd/cache/list (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.07s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.34s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-arm64 -p functional-451422 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.34s)

TestFunctional/serial/CacheCmd/cache/cache_reload (2.19s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-arm64 -p functional-451422 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-arm64 -p functional-451422 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-451422 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (349.850145ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-arm64 -p functional-451422 cache reload
functional_test.go:1154: (dbg) Done: out/minikube-linux-arm64 -p functional-451422 cache reload: (1.115882229s)
functional_test.go:1159: (dbg) Run:  out/minikube-linux-arm64 -p functional-451422 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.19s)

TestFunctional/serial/CacheCmd/cache/delete (0.15s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.15s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.15s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-arm64 -p functional-451422 kubectl -- --context functional-451422 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.15s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.16s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-451422 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.16s)
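
Both invocation styles reach the same kubectl: `minikube kubectl` forwards everything after `--` unchanged, while `out/kubectl` is the bundled binary invoked directly. By hand, with the profile from this run:

    # arguments after -- are passed through to kubectl untouched
    minikube -p functional-451422 kubectl -- --context functional-451422 get pods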

                                                
                                    
TestFunctional/serial/ExtraConfig (41.59s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-arm64 start -p functional-451422 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Done: out/minikube-linux-arm64 start -p functional-451422 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (41.585870513s)
functional_test.go:757: restart took 41.585982013s for "functional-451422" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (41.59s)
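
`--extra-config` threads flags through to individual Kubernetes components in component.key=value form, and `--wait=all` blocks until every verified component is healthy, which is why the restart accounts for essentially the whole 41.59s. The shape of the command:

    # restart with an extra apiserver admission plugin, waiting for all components
    minikube start -p functional-451422 \
      --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision \
      --wait=all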

                                                
                                    
TestFunctional/serial/ComponentHealth (0.1s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-451422 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.10s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.81s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-arm64 -p functional-451422 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-arm64 -p functional-451422 logs: (1.809094672s)
--- PASS: TestFunctional/serial/LogsCmd (1.81s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.83s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-arm64 -p functional-451422 logs --file /tmp/TestFunctionalserialLogsFileCmd241398653/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-arm64 -p functional-451422 logs --file /tmp/TestFunctionalserialLogsFileCmd241398653/001/logs.txt: (1.826615443s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.83s)
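
`minikube logs` writes to stdout by default; `--file` sends the same output to a host path instead, which is the form minikube's own error messages ask for when filing issues. For example:

    # capture cluster logs to a file instead of stdout
    minikube -p functional-451422 logs --file=/tmp/logs.txt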

                                                
                                    
TestFunctional/serial/InvalidService (4.44s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2320: (dbg) Run:  kubectl --context functional-451422 apply -f testdata/invalidsvc.yaml
functional_test.go:2334: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-451422
functional_test.go:2334: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-451422: exit status 115 (488.067726ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:32358 |
	|-----------|-------------|-------------|---------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2326: (dbg) Run:  kubectl --context functional-451422 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.44s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.62s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-451422 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-451422 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-451422 config get cpus: exit status 14 (98.921983ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-451422 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-451422 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-451422 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-451422 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-451422 config get cpus: exit status 14 (110.348644ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.62s)
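
Note the asserted behavior: `config get` on an unset key exits with status 14 rather than printing an empty value, both before the set and after the unset. The full round trip:

    minikube -p functional-451422 config get cpus      # exit 14: key not set
    minikube -p functional-451422 config set cpus 2
    minikube -p functional-451422 config get cpus      # prints 2
    minikube -p functional-451422 config unset cpus
    minikube -p functional-451422 config get cpus      # exit 14 again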

                                                
                                    
TestFunctional/parallel/DashboardCmd (10.59s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-451422 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-451422 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 1708727: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (10.59s)

                                                
                                    
TestFunctional/parallel/DryRun (0.78s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-arm64 start -p functional-451422 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-451422 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (395.090522ms)

                                                
                                                
-- stdout --
	* [functional-451422] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17830
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17830-1678586/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17830-1678586/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0109 00:12:27.748018 1708131 out.go:296] Setting OutFile to fd 1 ...
	I0109 00:12:27.748651 1708131 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0109 00:12:27.752107 1708131 out.go:309] Setting ErrFile to fd 2...
	I0109 00:12:27.752169 1708131 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0109 00:12:27.752510 1708131 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17830-1678586/.minikube/bin
	I0109 00:12:27.752985 1708131 out.go:303] Setting JSON to false
	I0109 00:12:27.753940 1708131 start.go:128] hostinfo: {"hostname":"ip-172-31-30-239","uptime":24890,"bootTime":1704734258,"procs":202,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0109 00:12:27.754050 1708131 start.go:138] virtualization:  
	I0109 00:12:27.757712 1708131 out.go:177] * [functional-451422] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0109 00:12:27.759465 1708131 out.go:177]   - MINIKUBE_LOCATION=17830
	I0109 00:12:27.759585 1708131 notify.go:220] Checking for updates...
	I0109 00:12:27.761275 1708131 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0109 00:12:27.763430 1708131 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17830-1678586/kubeconfig
	I0109 00:12:27.765670 1708131 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17830-1678586/.minikube
	I0109 00:12:27.767994 1708131 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0109 00:12:27.769919 1708131 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0109 00:12:27.772346 1708131 config.go:182] Loaded profile config "functional-451422": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0109 00:12:27.772865 1708131 driver.go:392] Setting default libvirt URI to qemu:///system
	I0109 00:12:27.839952 1708131 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I0109 00:12:27.840075 1708131 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0109 00:12:27.994879 1708131 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:46 SystemTime:2024-01-09 00:12:27.984296875 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0109 00:12:27.994989 1708131 docker.go:295] overlay module found
	I0109 00:12:27.998715 1708131 out.go:177] * Using the docker driver based on existing profile
	I0109 00:12:28.000607 1708131 start.go:298] selected driver: docker
	I0109 00:12:28.000641 1708131 start.go:902] validating driver "docker" against &{Name:functional-451422 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-451422 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0109 00:12:28.000892 1708131 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0109 00:12:28.005804 1708131 out.go:177] 
	W0109 00:12:28.007634 1708131 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0109 00:12:28.009702 1708131 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-arm64 start -p functional-451422 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.78s)
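
`--dry-run` runs the full validation pass without creating or changing anything, so the undersized memory request fails fast with exit code 23 (RSRC_INSUFFICIENT_REQ_MEMORY), while the second, unconstrained dry run validates cleanly against the existing profile:

    # validation only: 250MB is rejected because the usable minimum is 1800MB
    minikube start -p functional-451422 --dry-run --memory 250MB \
      --driver=docker --container-runtime=crio
    echo $?    # 23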

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.27s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-arm64 start -p functional-451422 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-451422 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (268.973895ms)

                                                
                                                
-- stdout --
	* [functional-451422] minikube v1.32.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17830
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17830-1678586/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17830-1678586/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0109 00:12:27.439737 1708066 out.go:296] Setting OutFile to fd 1 ...
	I0109 00:12:27.439922 1708066 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0109 00:12:27.439928 1708066 out.go:309] Setting ErrFile to fd 2...
	I0109 00:12:27.439934 1708066 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0109 00:12:27.441761 1708066 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17830-1678586/.minikube/bin
	I0109 00:12:27.442203 1708066 out.go:303] Setting JSON to false
	I0109 00:12:27.443443 1708066 start.go:128] hostinfo: {"hostname":"ip-172-31-30-239","uptime":24890,"bootTime":1704734258,"procs":203,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0109 00:12:27.443515 1708066 start.go:138] virtualization:  
	I0109 00:12:27.447929 1708066 out.go:177] * [functional-451422] minikube v1.32.0 sur Ubuntu 20.04 (arm64)
	I0109 00:12:27.449975 1708066 notify.go:220] Checking for updates...
	I0109 00:12:27.450839 1708066 out.go:177]   - MINIKUBE_LOCATION=17830
	I0109 00:12:27.452834 1708066 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0109 00:12:27.455398 1708066 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17830-1678586/kubeconfig
	I0109 00:12:27.457585 1708066 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17830-1678586/.minikube
	I0109 00:12:27.465025 1708066 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0109 00:12:27.467860 1708066 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0109 00:12:27.472332 1708066 config.go:182] Loaded profile config "functional-451422": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0109 00:12:27.473128 1708066 driver.go:392] Setting default libvirt URI to qemu:///system
	I0109 00:12:27.519811 1708066 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I0109 00:12:27.519949 1708066 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0109 00:12:27.608043 1708066 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:46 SystemTime:2024-01-09 00:12:27.597097467 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0109 00:12:27.608145 1708066 docker.go:295] overlay module found
	I0109 00:12:27.610925 1708066 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0109 00:12:27.612925 1708066 start.go:298] selected driver: docker
	I0109 00:12:27.612943 1708066 start.go:902] validating driver "docker" against &{Name:functional-451422 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-451422 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0109 00:12:27.613042 1708066 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0109 00:12:27.615936 1708066 out.go:177] 
	W0109 00:12:27.617881 1708066 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0109 00:12:27.619895 1708066 out.go:177] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.27s)

                                                
                                    
TestFunctional/parallel/StatusCmd (1.44s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-arm64 -p functional-451422 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-arm64 -p functional-451422 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-arm64 -p functional-451422 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.44s)
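
`status` takes a Go template via -f (the exported fields are Host, Kubelet, APIServer, and Kubeconfig; the "kublet" label in the test's template string is just display text) and `-o json` for machine-readable output:

    # custom Go-template output, then the JSON form of the same status
    minikube -p functional-451422 status -f 'host:{{.Host}},kubelet:{{.Kubelet}}'
    minikube -p functional-451422 status -o json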

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (9.68s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1626: (dbg) Run:  kubectl --context functional-451422 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-451422 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-7799dfb7c6-l4f6r" [51e4aa58-51b1-42ad-95da-8a6b335fdc30] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-7799dfb7c6-l4f6r" [51e4aa58-51b1-42ad-95da-8a6b335fdc30] Running
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 9.003947495s
functional_test.go:1648: (dbg) Run:  out/minikube-linux-arm64 -p functional-451422 service hello-node-connect --url
functional_test.go:1654: found endpoint for hello-node-connect: http://192.168.49.2:30342
functional_test.go:1674: http://192.168.49.2:30342: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-7799dfb7c6-l4f6r

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:30342
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (9.68s)
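
The end-to-end pattern verified here (deploy, expose as NodePort, resolve the URL through minikube, request it) reproduces as follows, with the image and names taken from this run:

    kubectl --context functional-451422 create deployment hello-node-connect \
      --image=registry.k8s.io/echoserver-arm:1.8
    kubectl --context functional-451422 expose deployment hello-node-connect \
      --type=NodePort --port=8080
    # resolve node IP + NodePort, then hit the echoserver
    URL=$(minikube -p functional-451422 service hello-node-connect --url)
    curl -s "$URL"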

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.19s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-linux-arm64 -p functional-451422 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-linux-arm64 -p functional-451422 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.19s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (23.68s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [be38584e-e650-4f1c-aec0-a4423e4fadbb] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003977521s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-451422 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-451422 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-451422 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-451422 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [09e35b7f-5d9f-4fc2-bce3-21ec62640a7d] Pending
helpers_test.go:344: "sp-pod" [09e35b7f-5d9f-4fc2-bce3-21ec62640a7d] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
E0109 00:12:04.796704 1683967 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/addons-983119/client.crt: no such file or directory
helpers_test.go:344: "sp-pod" [09e35b7f-5d9f-4fc2-bce3-21ec62640a7d] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 10.004098677s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-451422 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-451422 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-451422 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [5706a679-9a75-4459-9ed5-647eb03ef5c8] Pending
helpers_test.go:344: "sp-pod" [5706a679-9a75-4459-9ed5-647eb03ef5c8] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 6.00421552s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-451422 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (23.68s)
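
The claim/write/recreate/read sequence above relies only on the default storage class supplied by minikube's storage-provisioner addon. A minimal claim along the lines of the testdata manifest (name and size illustrative):

    kubectl --context functional-451422 apply -f - <<'EOF'
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: myclaim
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 500Mi
    EOF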

                                                
                                    
TestFunctional/parallel/SSHCmd (0.83s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-linux-arm64 -p functional-451422 ssh "echo hello"
functional_test.go:1741: (dbg) Run:  out/minikube-linux-arm64 -p functional-451422 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.83s)

                                                
                                    
TestFunctional/parallel/CpCmd (2.31s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-451422 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-451422 ssh -n functional-451422 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-451422 cp functional-451422:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd723000391/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-451422 ssh -n functional-451422 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-451422 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-451422 ssh -n functional-451422 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.31s)
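
`minikube cp` copies in both directions, with the node side addressed as <profile>:<path>; the three copies checked above correspond to:

    # host -> node
    minikube -p functional-451422 cp testdata/cp-test.txt /home/docker/cp-test.txt
    # node -> host
    minikube -p functional-451422 cp functional-451422:/home/docker/cp-test.txt /tmp/cp-test.txt
    # host -> node, creating intermediate directories as needed
    minikube -p functional-451422 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt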

                                                
                                    
TestFunctional/parallel/FileSync (0.43s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1928: Checking for existence of /etc/test/nested/copy/1683967/hosts within VM
functional_test.go:1930: (dbg) Run:  out/minikube-linux-arm64 -p functional-451422 ssh "sudo cat /etc/test/nested/copy/1683967/hosts"
functional_test.go:1935: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.43s)
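
File sync mirrors anything placed under ~/.minikube/files on the host into the node at the same relative path during provisioning, which is why a file staged at .../files/etc/test/nested/copy/1683967/hosts shows up at /etc/test/nested/copy/1683967/hosts in the VM. A rough sketch (paths illustrative):

    # stage a file on the host...
    mkdir -p ~/.minikube/files/etc/test
    echo 'Test file for checking file sync process' > ~/.minikube/files/etc/test/hosts
    # ...after the next start it is visible inside the node
    minikube -p functional-451422 ssh "sudo cat /etc/test/hosts"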

                                                
                                    
TestFunctional/parallel/CertSync (2.47s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1971: Checking for existence of /etc/ssl/certs/1683967.pem within VM
functional_test.go:1972: (dbg) Run:  out/minikube-linux-arm64 -p functional-451422 ssh "sudo cat /etc/ssl/certs/1683967.pem"
functional_test.go:1971: Checking for existence of /usr/share/ca-certificates/1683967.pem within VM
functional_test.go:1972: (dbg) Run:  out/minikube-linux-arm64 -p functional-451422 ssh "sudo cat /usr/share/ca-certificates/1683967.pem"
functional_test.go:1971: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1972: (dbg) Run:  out/minikube-linux-arm64 -p functional-451422 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1998: Checking for existence of /etc/ssl/certs/16839672.pem within VM
functional_test.go:1999: (dbg) Run:  out/minikube-linux-arm64 -p functional-451422 ssh "sudo cat /etc/ssl/certs/16839672.pem"
functional_test.go:1998: Checking for existence of /usr/share/ca-certificates/16839672.pem within VM
functional_test.go:1999: (dbg) Run:  out/minikube-linux-arm64 -p functional-451422 ssh "sudo cat /usr/share/ca-certificates/16839672.pem"
functional_test.go:1998: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1999: (dbg) Run:  out/minikube-linux-arm64 -p functional-451422 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.47s)

                                                
                                    
TestFunctional/parallel/NodeLabels (0.11s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-451422 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.11s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.9s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2026: (dbg) Run:  out/minikube-linux-arm64 -p functional-451422 ssh "sudo systemctl is-active docker"
functional_test.go:2026: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-451422 ssh "sudo systemctl is-active docker": exit status 1 (456.847142ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2026: (dbg) Run:  out/minikube-linux-arm64 -p functional-451422 ssh "sudo systemctl is-active containerd"
functional_test.go:2026: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-451422 ssh "sudo systemctl is-active containerd": exit status 1 (438.781061ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.90s)

                                                
                                    
TestFunctional/parallel/License (0.28s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2287: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.28s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.66s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-451422 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-451422 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-451422 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 1706096: os: process already finished
helpers_test.go:502: unable to terminate pid 1705964: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-451422 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.66s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-451422 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.35s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-451422 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [19909a03-5368-402f-845a-aa93fb1f39f2] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [19909a03-5368-402f-845a-aa93fb1f39f2] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 10.004255102s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.35s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.1s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-451422 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.10s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.98.32.133 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-451422 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
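
`minikube tunnel` is what lets the LoadBalancer service in this group acquire an ingress IP on a local cluster; without it, nginx-svc would stay <pending>. The serial flow amounts to:

    # terminal 1: create a route so LoadBalancer services receive an IP
    minikube -p functional-451422 tunnel
    # terminal 2: once the tunnel is up, the ingress IP is populated
    kubectl --context functional-451422 get svc nginx-svc \
      -o jsonpath='{.status.loadBalancer.ingress[0].ip}'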

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (7.21s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1436: (dbg) Run:  kubectl --context functional-451422 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-451422 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-759d89bdcc-z2fgm" [fcccd895-23fe-42f6-a3d9-72c58832c0ff] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-759d89bdcc-z2fgm" [fcccd895-23fe-42f6-a3d9-72c58832c0ff] Running
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 7.004174989s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (7.21s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.45s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.45s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.43s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1314: Took "359.37992ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1328: Took "70.558717ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.43s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.43s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1365: Took "360.110092ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1378: Took "73.403945ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.43s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (7.33s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-451422 /tmp/TestFunctionalparallelMountCmdany-port3266410424/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1704759140250183119" to /tmp/TestFunctionalparallelMountCmdany-port3266410424/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1704759140250183119" to /tmp/TestFunctionalparallelMountCmdany-port3266410424/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1704759140250183119" to /tmp/TestFunctionalparallelMountCmdany-port3266410424/001/test-1704759140250183119
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-451422 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-451422 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (389.091774ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-451422 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-451422 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jan  9 00:12 created-by-test
-rw-r--r-- 1 docker docker 24 Jan  9 00:12 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jan  9 00:12 test-1704759140250183119
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-451422 ssh cat /mount-9p/test-1704759140250183119
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-451422 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [9d37e558-262b-4a16-8ab5-5ceb9e6b7b4d] Pending
helpers_test.go:344: "busybox-mount" [9d37e558-262b-4a16-8ab5-5ceb9e6b7b4d] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [9d37e558-262b-4a16-8ab5-5ceb9e6b7b4d] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [9d37e558-262b-4a16-8ab5-5ceb9e6b7b4d] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.003815632s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-451422 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-451422 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-451422 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-451422 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-451422 /tmp/TestFunctionalparallelMountCmdany-port3266410424/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.33s)
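
Note the retry above: the first findmnt probe exits 1 before the second succeeds, because the 9p mount is established asynchronously after the mount daemon starts. A minimal Go sketch of that poll-until-mounted pattern (the binary path and profile name are copied from the log; waitForMount is a hypothetical helper, not part of the suite):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitForMount polls the guest until the 9p mount is visible,
    // mirroring the retry seen in the log above.
    func waitForMount(profile, mountPoint string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            probe := fmt.Sprintf("findmnt -T %s | grep 9p", mountPoint)
            if exec.Command("out/minikube-linux-arm64", "-p", profile, "ssh", probe).Run() == nil {
                return nil // mount is visible in the guest
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("mount %s not ready within %v", mountPoint, timeout)
    }

    func main() {
        if err := waitForMount("functional-451422", "/mount-9p", 10*time.Second); err != nil {
            fmt.Println(err)
        }
    }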

TestFunctional/parallel/ServiceCmd/List (0.56s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-linux-arm64 -p functional-451422 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.56s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.7s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-linux-arm64 -p functional-451422 service list -o json
functional_test.go:1493: Took "700.839301ms" to run "out/minikube-linux-arm64 -p functional-451422 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.70s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.44s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-linux-arm64 -p functional-451422 service --namespace=default --https --url hello-node
functional_test.go:1521: found endpoint: https://192.168.49.2:30886
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.44s)

TestFunctional/parallel/ServiceCmd/Format (0.44s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-linux-arm64 -p functional-451422 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.44s)

TestFunctional/parallel/ServiceCmd/URL (0.45s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-linux-arm64 -p functional-451422 service hello-node --url
functional_test.go:1564: found endpoint for hello-node: http://192.168.49.2:30886
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.45s)
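
The HTTPS and URL subtests above resolve the same NodePort (30886) on the node IP, once over https and once over http. A minimal Go sketch of probing such an endpoint (the URL is copied from the log; the probe itself is illustrative, not part of the suite):

    package main

    import (
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{Timeout: 5 * time.Second}
        resp, err := client.Get("http://192.168.49.2:30886") // endpoint reported above
        if err != nil {
            fmt.Println("probe failed:", err)
            return
        }
        defer resp.Body.Close()
        fmt.Println("status:", resp.Status)
    }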

TestFunctional/parallel/MountCmd/specific-port (2.33s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-451422 /tmp/TestFunctionalparallelMountCmdspecific-port3470784672/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-451422 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-451422 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (573.481923ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-451422 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-451422 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-451422 /tmp/TestFunctionalparallelMountCmdspecific-port3470784672/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-451422 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-451422 ssh "sudo umount -f /mount-9p": exit status 1 (416.849551ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr **
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-451422 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-451422 /tmp/TestFunctionalparallelMountCmdspecific-port3470784672/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.33s)

TestFunctional/parallel/MountCmd/VerifyCleanup (2.24s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-451422 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1986194875/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-451422 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1986194875/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-451422 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1986194875/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-451422 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Done: out/minikube-linux-arm64 -p functional-451422 ssh "findmnt -T" /mount1: (1.284865398s)
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-451422 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-451422 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-451422 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-451422 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1986194875/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-451422 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1986194875/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-451422 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1986194875/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.24s)
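
The "unable to find parent, assuming dead" lines are liveness checks on the mount processes after mount --kill=true. On Linux such a check can be approximated by sending signal 0, which performs error checking only and reports whether the PID still exists; a short sketch (processAlive is a hypothetical helper, not the suite's):

    package main

    import (
        "fmt"
        "syscall"
    )

    // processAlive reports whether a PID exists; signal 0 sends nothing
    // and only checks for errors (ESRCH means no such process).
    func processAlive(pid int) bool {
        err := syscall.Kill(pid, syscall.Signal(0))
        return err == nil || err == syscall.EPERM
    }

    func main() {
        fmt.Println(processAlive(1)) // PID 1 always exists on Linux
    }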

TestFunctional/parallel/Version/short (0.1s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2255: (dbg) Run:  out/minikube-linux-arm64 -p functional-451422 version --short
--- PASS: TestFunctional/parallel/Version/short (0.10s)

TestFunctional/parallel/Version/components (1.24s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2269: (dbg) Run:  out/minikube-linux-arm64 -p functional-451422 version -o=json --components
functional_test.go:2269: (dbg) Done: out/minikube-linux-arm64 -p functional-451422 version -o=json --components: (1.241159944s)
--- PASS: TestFunctional/parallel/Version/components (1.24s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-451422 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-451422 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.4
registry.k8s.io/kube-proxy:v1.28.4
registry.k8s.io/kube-controller-manager:v1.28.4
registry.k8s.io/kube-apiserver:v1.28.4
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-451422
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/kindest/kindnetd:v20230809-80a64d96
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-451422 image ls --format short --alsologtostderr:
I0109 00:12:54.947713 1710720 out.go:296] Setting OutFile to fd 1 ...
I0109 00:12:54.947957 1710720 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0109 00:12:54.947985 1710720 out.go:309] Setting ErrFile to fd 2...
I0109 00:12:54.948005 1710720 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0109 00:12:54.948312 1710720 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17830-1678586/.minikube/bin
I0109 00:12:54.949031 1710720 config.go:182] Loaded profile config "functional-451422": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0109 00:12:54.949241 1710720 config.go:182] Loaded profile config "functional-451422": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0109 00:12:54.949876 1710720 cli_runner.go:164] Run: docker container inspect functional-451422 --format={{.State.Status}}
I0109 00:12:54.982199 1710720 ssh_runner.go:195] Run: systemctl --version
I0109 00:12:54.982261 1710720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-451422
I0109 00:12:55.005556 1710720 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34379 SSHKeyPath:/home/jenkins/minikube-integration/17830-1678586/.minikube/machines/functional-451422/id_rsa Username:docker}
I0109 00:12:55.112661 1710720 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.29s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.32s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-451422 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-451422 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| docker.io/kindest/kindnetd              | v20230809-80a64d96 | 04b4eaa3d3db8 | 60.9MB |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 1611cd07b61d5 | 3.77MB |
| registry.k8s.io/coredns/coredns         | v1.10.1            | 97e04611ad434 | 51.4MB |
| registry.k8s.io/kube-controller-manager | v1.28.4            | 9961cbceaf234 | 117MB  |
| registry.k8s.io/kube-proxy              | v1.28.4            | 3ca3ca488cf13 | 70MB   |
| docker.io/library/nginx                 | alpine             | 74077e780ec71 | 45.3MB |
| registry.k8s.io/kube-scheduler          | v1.28.4            | 05c284c929889 | 59.3MB |
| registry.k8s.io/pause                   | 3.1                | 8057e0500773a | 529kB  |
| registry.k8s.io/pause                   | 3.3                | 3d18732f8686c | 487kB  |
| gcr.io/google-containers/addon-resizer  | functional-451422  | ffd4cfbbe753e | 34.1MB |
| registry.k8s.io/echoserver-arm          | 1.8                | 72565bf5bbedf | 87.5MB |
| docker.io/library/nginx                 | latest             | 8aea65d81da20 | 196MB  |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | ba04bb24b9575 | 29MB   |
| registry.k8s.io/etcd                    | 3.5.9-0            | 9cdd6470f48c8 | 182MB  |
| registry.k8s.io/kube-apiserver          | v1.28.4            | 04b4c447bb9d4 | 121MB  |
| registry.k8s.io/pause                   | 3.9                | 829e9de338bd5 | 520kB  |
| registry.k8s.io/pause                   | latest             | 8cb2091f603e7 | 246kB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-451422 image ls --format table --alsologtostderr:
I0109 00:12:55.703310 1710862 out.go:296] Setting OutFile to fd 1 ...
I0109 00:12:55.703523 1710862 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0109 00:12:55.703605 1710862 out.go:309] Setting ErrFile to fd 2...
I0109 00:12:55.703637 1710862 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0109 00:12:55.704137 1710862 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17830-1678586/.minikube/bin
I0109 00:12:55.704981 1710862 config.go:182] Loaded profile config "functional-451422": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0109 00:12:55.705184 1710862 config.go:182] Loaded profile config "functional-451422": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0109 00:12:55.705754 1710862 cli_runner.go:164] Run: docker container inspect functional-451422 --format={{.State.Status}}
I0109 00:12:55.728853 1710862 ssh_runner.go:195] Run: systemctl --version
I0109 00:12:55.728906 1710862 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-451422
I0109 00:12:55.754981 1710862 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34379 SSHKeyPath:/home/jenkins/minikube-integration/17830-1678586/.minikube/machines/functional-451422/id_rsa Username:docker}
I0109 00:12:55.856319 1710862 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.32s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.36s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-451422 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-451422 image ls --format json --alsologtostderr:
[{"id":"9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace","repoDigests":["registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3","registry.k8s.io/etcd@sha256:e60789d18cc66486e6db4094383f9732280092f07a1f5455ecbe11d404c8e48b"],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"182203183"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca"],"repoTags":["registry.k8s.io/pause:latest"],"size":"246070"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3774172"},{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":["registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5"],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"87536549"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":["registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"528622"},{"id":"04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26","repoDigests":["docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052","docker.io/kindest/kindnetd@sha256:f61a1c916e587322444cab4e745a66c8bed6c30208e4dae28d5a1d18c070adb2"],"repoTags":["docker.io/kindest/kindnetd:v20230809-80a64d96"],"size":"60867618"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":["gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126"],"repoTags":["gcr.io/google-containers/addon-resizer:functional-451422"],"size":"34114467"},{"id":"8aea65d81da202cf886d7766c7f2691bb9e363c6b5d9b1f5d9ddaaa4bc1e90c2","repoDigests":["docker.io/library/nginx@sha256:2bdc49f2f8ae8d8dc50ed00f2ee56d00385c6f8bc8a8b320d0a294d9e3b49026","docker.io/library/nginx@sha256:73a06b3a2577448f9acc23502a0cb4d41919da9cc5035e66b0a9a09715397684"],"repoTags":["docker.io/library/nginx:latest"],"size":"196113558"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29037500"},{"id":"97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108","repoDigests":["registry.k8s.io/coredns/coredns@sha256:74130b944396a0b0ca9af923ee6e03b08a35d98fc1bbaef4e35cf9acc5599105","registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e"],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"51393451"},{"id":"04b4c447bb9d4840af3bf7e836397379d65df87c86e55dcd27f31a8d11df2419","repoDigests":["registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb","registry.k8s.io/kube-apiserver@sha256:a4c3e6bec39f5dcb221a2f08266513ab19b7d977ccc76a0bcaf04d4935ac0fb2"],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.4"],"size":"121119694"},{"id":"9961cbceaf234d59b7dcf8a197a024f3e3ce4b7fe2b67c2378efd3d209ca994b","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c","registry.k8s.io/kube-controller-manager@sha256:fe49ea386d014cbf10cd16f53900b91bb7e7c32c5cf4fdcea550c33bedfdc92e"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.4"],"size":"117252916"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":["registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"487479"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf"],"repoTags":[],"size":"247562353"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c","docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a"],"repoTags":[],"size":"42263767"},{"id":"05c284c929889d88306fdb3dd14ee2d0132543740f9e247685243214fc3d2c54","repoDigests":["registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba","registry.k8s.io/kube-scheduler@sha256:ddb0fb05335238789e9a847f0c6731e1da918c42a389625cb2a7ec577ca20afe"],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.4"],"size":"59253556"},{"id":"829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e","repoDigests":["registry.k8s.io/pause@sha256:3ec98b8452dc8ae265a6917dfb81587ac78849e520d5dbba6de524851d20eca6","registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"520014"},{"id":"74077e780ec714353793e0ef5677b55d7396aa1d31e77ec899f54842f7142448","repoDigests":["docker.io/library/nginx@sha256:7913e8fa2e6a5f0160a5e6b7ea48b7d4a301c6058d63c3d632a35a59093cb4eb","docker.io/library/nginx@sha256:a59278fd22a9d411121e190b8cec8aa57b306aa3332459197777583beb728f59"],"repoTags":["docker.io/library/nginx:alpine"],"size":"45330189"},{"id":"3ca3ca488cf13fde14cfc4b3ffde0c53a8c161b030f4a444a797fba6aef38c39","repoDigests":["registry.k8s.io/kube-proxy@sha256:460fb2711108dbfbe9ad77860d0fd8aad842c0e97f1aee757b33f2218daece68","registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"],"repoTags":["registry.k8s.io/kube-proxy:v1.28.4"],"size":"69992343"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-451422 image ls --format json --alsologtostderr:
I0109 00:12:55.366519 1710781 out.go:296] Setting OutFile to fd 1 ...
I0109 00:12:55.366800 1710781 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0109 00:12:55.366827 1710781 out.go:309] Setting ErrFile to fd 2...
I0109 00:12:55.366846 1710781 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0109 00:12:55.367167 1710781 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17830-1678586/.minikube/bin
I0109 00:12:55.367913 1710781 config.go:182] Loaded profile config "functional-451422": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0109 00:12:55.368128 1710781 config.go:182] Loaded profile config "functional-451422": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0109 00:12:55.368718 1710781 cli_runner.go:164] Run: docker container inspect functional-451422 --format={{.State.Status}}
I0109 00:12:55.398136 1710781 ssh_runner.go:195] Run: systemctl --version
I0109 00:12:55.398190 1710781 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-451422
I0109 00:12:55.419180 1710781 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34379 SSHKeyPath:/home/jenkins/minikube-integration/17830-1678586/.minikube/machines/functional-451422/id_rsa Username:docker}
I0109 00:12:55.528078 1710781 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.36s)
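
For reference, the JSON printed above is an array of objects keyed by id, repoDigests, repoTags, and size. A minimal Go sketch that decodes it (field names are copied from the output; the sample payload is abbreviated):

    package main

    import (
        "encoding/json"
        "fmt"
    )

    type image struct {
        ID          string   `json:"id"`
        RepoDigests []string `json:"repoDigests"`
        RepoTags    []string `json:"repoTags"`
        Size        string   `json:"size"` // bytes, encoded as a string
    }

    func main() {
        data := []byte(`[{"id":"abc","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"520014"}]`)
        var images []image
        if err := json.Unmarshal(data, &images); err != nil {
            panic(err)
        }
        for _, img := range images {
            fmt.Println(img.RepoTags, img.Size)
        }
    }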

TestFunctional/parallel/ImageCommands/ImageListYaml (0.37s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-451422 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-451422 image ls --format yaml --alsologtostderr:
- id: 04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26
repoDigests:
- docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052
- docker.io/kindest/kindnetd@sha256:f61a1c916e587322444cab4e745a66c8bed6c30208e4dae28d5a1d18c070adb2
repoTags:
- docker.io/kindest/kindnetd:v20230809-80a64d96
size: "60867618"
- id: 8aea65d81da202cf886d7766c7f2691bb9e363c6b5d9b1f5d9ddaaa4bc1e90c2
repoDigests:
- docker.io/library/nginx@sha256:2bdc49f2f8ae8d8dc50ed00f2ee56d00385c6f8bc8a8b320d0a294d9e3b49026
- docker.io/library/nginx@sha256:73a06b3a2577448f9acc23502a0cb4d41919da9cc5035e66b0a9a09715397684
repoTags:
- docker.io/library/nginx:latest
size: "196113558"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests:
- gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126
repoTags:
- gcr.io/google-containers/addon-resizer:functional-451422
size: "34114467"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29037500"
- id: 05c284c929889d88306fdb3dd14ee2d0132543740f9e247685243214fc3d2c54
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba
- registry.k8s.io/kube-scheduler@sha256:ddb0fb05335238789e9a847f0c6731e1da918c42a389625cb2a7ec577ca20afe
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.4
size: "59253556"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"
- id: 74077e780ec714353793e0ef5677b55d7396aa1d31e77ec899f54842f7142448
repoDigests:
- docker.io/library/nginx@sha256:7913e8fa2e6a5f0160a5e6b7ea48b7d4a301c6058d63c3d632a35a59093cb4eb
- docker.io/library/nginx@sha256:a59278fd22a9d411121e190b8cec8aa57b306aa3332459197777583beb728f59
repoTags:
- docker.io/library/nginx:alpine
size: "45330189"
- id: 04b4c447bb9d4840af3bf7e836397379d65df87c86e55dcd27f31a8d11df2419
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb
- registry.k8s.io/kube-apiserver@sha256:a4c3e6bec39f5dcb221a2f08266513ab19b7d977ccc76a0bcaf04d4935ac0fb2
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.4
size: "121119694"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf
repoTags: []
size: "247562353"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3774172"
- id: 9961cbceaf234d59b7dcf8a197a024f3e3ce4b7fe2b67c2378efd3d209ca994b
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c
- registry.k8s.io/kube-controller-manager@sha256:fe49ea386d014cbf10cd16f53900b91bb7e7c32c5cf4fdcea550c33bedfdc92e
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.4
size: "117252916"
- id: 3ca3ca488cf13fde14cfc4b3ffde0c53a8c161b030f4a444a797fba6aef38c39
repoDigests:
- registry.k8s.io/kube-proxy@sha256:460fb2711108dbfbe9ad77860d0fd8aad842c0e97f1aee757b33f2218daece68
- registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532
repoTags:
- registry.k8s.io/kube-proxy:v1.28.4
size: "69992343"
- id: 829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e
repoDigests:
- registry.k8s.io/pause@sha256:3ec98b8452dc8ae265a6917dfb81587ac78849e520d5dbba6de524851d20eca6
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
repoTags:
- registry.k8s.io/pause:3.9
size: "520014"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
- docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a
repoTags: []
size: "42263767"
- id: 97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:74130b944396a0b0ca9af923ee6e03b08a35d98fc1bbaef4e35cf9acc5599105
- registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "51393451"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests:
- registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "87536549"
- id: 9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace
repoDigests:
- registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3
- registry.k8s.io/etcd@sha256:e60789d18cc66486e6db4094383f9732280092f07a1f5455ecbe11d404c8e48b
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "182203183"

functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-451422 image ls --format yaml --alsologtostderr:
I0109 00:12:54.973294 1710721 out.go:296] Setting OutFile to fd 1 ...
I0109 00:12:54.973563 1710721 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0109 00:12:54.973591 1710721 out.go:309] Setting ErrFile to fd 2...
I0109 00:12:54.973614 1710721 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0109 00:12:54.974316 1710721 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17830-1678586/.minikube/bin
I0109 00:12:54.975481 1710721 config.go:182] Loaded profile config "functional-451422": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0109 00:12:54.975726 1710721 config.go:182] Loaded profile config "functional-451422": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0109 00:12:54.976685 1710721 cli_runner.go:164] Run: docker container inspect functional-451422 --format={{.State.Status}}
I0109 00:12:54.998295 1710721 ssh_runner.go:195] Run: systemctl --version
I0109 00:12:54.998353 1710721 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-451422
I0109 00:12:55.046669 1710721 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34379 SSHKeyPath:/home/jenkins/minikube-integration/17830-1678586/.minikube/machines/functional-451422/id_rsa Username:docker}
I0109 00:12:55.157028 1710721 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.37s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.3s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-arm64 -p functional-451422 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-451422 ssh pgrep buildkitd: exit status 1 (373.546869ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-arm64 -p functional-451422 image build -t localhost/my-image:functional-451422 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-arm64 -p functional-451422 image build -t localhost/my-image:functional-451422 testdata/build --alsologtostderr: (2.664850975s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-arm64 -p functional-451422 image build -t localhost/my-image:functional-451422 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 39156e1a93c
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-451422
--> 9822d45ce66
Successfully tagged localhost/my-image:functional-451422
9822d45ce6616b43f6a481834c18e309d93581b7cf647ce06c07b8b958e1435c
functional_test.go:322: (dbg) Stderr: out/minikube-linux-arm64 -p functional-451422 image build -t localhost/my-image:functional-451422 testdata/build --alsologtostderr:
I0109 00:12:55.653629 1710856 out.go:296] Setting OutFile to fd 1 ...
I0109 00:12:55.655290 1710856 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0109 00:12:55.655305 1710856 out.go:309] Setting ErrFile to fd 2...
I0109 00:12:55.655312 1710856 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0109 00:12:55.655607 1710856 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17830-1678586/.minikube/bin
I0109 00:12:55.656360 1710856 config.go:182] Loaded profile config "functional-451422": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0109 00:12:55.657752 1710856 config.go:182] Loaded profile config "functional-451422": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0109 00:12:55.658483 1710856 cli_runner.go:164] Run: docker container inspect functional-451422 --format={{.State.Status}}
I0109 00:12:55.689506 1710856 ssh_runner.go:195] Run: systemctl --version
I0109 00:12:55.689561 1710856 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-451422
I0109 00:12:55.714244 1710856 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34379 SSHKeyPath:/home/jenkins/minikube-integration/17830-1678586/.minikube/machines/functional-451422/id_rsa Username:docker}
I0109 00:12:55.820454 1710856 build_images.go:151] Building image from path: /tmp/build.3030495249.tar
I0109 00:12:55.820593 1710856 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0109 00:12:55.832730 1710856 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3030495249.tar
I0109 00:12:55.837391 1710856 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3030495249.tar: stat -c "%s %y" /var/lib/minikube/build/build.3030495249.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.3030495249.tar': No such file or directory
I0109 00:12:55.837421 1710856 ssh_runner.go:362] scp /tmp/build.3030495249.tar --> /var/lib/minikube/build/build.3030495249.tar (3072 bytes)
I0109 00:12:55.872997 1710856 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3030495249
I0109 00:12:55.893038 1710856 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3030495249 -xf /var/lib/minikube/build/build.3030495249.tar
I0109 00:12:55.910368 1710856 crio.go:297] Building image: /var/lib/minikube/build/build.3030495249
I0109 00:12:55.910528 1710856 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-451422 /var/lib/minikube/build/build.3030495249 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I0109 00:12:58.177955 1710856 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-451422 /var/lib/minikube/build/build.3030495249 --cgroup-manager=cgroupfs: (2.267394716s)
I0109 00:12:58.178028 1710856 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3030495249
I0109 00:12:58.188682 1710856 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3030495249.tar
I0109 00:12:58.200132 1710856 build_images.go:207] Built localhost/my-image:functional-451422 from /tmp/build.3030495249.tar
I0109 00:12:58.200161 1710856 build_images.go:123] succeeded building to: functional-451422
I0109 00:12:58.200166 1710856 build_images.go:124] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-451422 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.30s)
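
The stderr log above spells out the crio-backed build flow: the build context is tarred locally, copied into /var/lib/minikube/build, untarred, and built in-guest with podman, then the staging files are removed. A sketch of the same in-guest sequence driven over minikube ssh (guestRun is a hypothetical wrapper; the paths, tag, and --cgroup-manager flag are copied from the log):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // guestRun executes one shell command inside the minikube guest.
    func guestRun(profile, command string) error {
        out, err := exec.Command("out/minikube-linux-arm64", "-p", profile, "ssh", command).CombinedOutput()
        if err != nil {
            return fmt.Errorf("%s: %w\n%s", command, err, out)
        }
        return nil
    }

    func main() {
        profile := "functional-451422"
        dir := "/var/lib/minikube/build/build.3030495249"
        steps := []string{
            "sudo mkdir -p " + dir,
            "sudo tar -C " + dir + " -xf " + dir + ".tar",
            "sudo podman build -t localhost/my-image:functional-451422 " + dir + " --cgroup-manager=cgroupfs",
            "sudo rm -rf " + dir + " " + dir + ".tar",
        }
        for _, step := range steps {
            if err := guestRun(profile, step); err != nil {
                panic(err)
            }
        }
    }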

TestFunctional/parallel/ImageCommands/Setup (2.67s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (2.619939134s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-451422
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.67s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (5.54s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-arm64 -p functional-451422 image load --daemon gcr.io/google-containers/addon-resizer:functional-451422 --alsologtostderr
2024/01/09 00:12:38 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:354: (dbg) Done: out/minikube-linux-arm64 -p functional-451422 image load --daemon gcr.io/google-containers/addon-resizer:functional-451422 --alsologtostderr: (5.213574125s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-451422 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (5.54s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.56s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-arm64 -p functional-451422 image load --daemon gcr.io/google-containers/addon-resizer:functional-451422 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-arm64 -p functional-451422 image load --daemon gcr.io/google-containers/addon-resizer:functional-451422 --alsologtostderr: (3.24373009s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-451422 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.56s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.21s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2118: (dbg) Run:  out/minikube-linux-arm64 -p functional-451422 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.21s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.2s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2118: (dbg) Run:  out/minikube-linux-arm64 -p functional-451422 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.20s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.2s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2118: (dbg) Run:  out/minikube-linux-arm64 -p functional-451422 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.20s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.93s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (2.025903167s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-451422
functional_test.go:244: (dbg) Run:  out/minikube-linux-arm64 -p functional-451422 image load --daemon gcr.io/google-containers/addon-resizer:functional-451422 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-arm64 -p functional-451422 image load --daemon gcr.io/google-containers/addon-resizer:functional-451422 --alsologtostderr: (3.619531555s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-451422 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.93s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.96s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-arm64 -p functional-451422 image save gcr.io/google-containers/addon-resizer:functional-451422 /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.96s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.59s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-arm64 -p functional-451422 image rm gcr.io/google-containers/addon-resizer:functional-451422 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-451422 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.59s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.32s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-arm64 -p functional-451422 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-arm64 -p functional-451422 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar --alsologtostderr: (1.064247727s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-451422 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.32s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.97s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-451422
functional_test.go:423: (dbg) Run:  out/minikube-linux-arm64 -p functional-451422 image save --daemon gcr.io/google-containers/addon-resizer:functional-451422 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-451422
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.97s)
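
This subtest round-trips an image through the host daemon: remove it from local Docker, have minikube write it back with image save --daemon, then verify with docker image inspect. A sketch replaying the logged commands (the run helper is illustrative, not the suite's):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func run(name string, args ...string) error {
        out, err := exec.Command(name, args...).CombinedOutput()
        if err != nil {
            return fmt.Errorf("%s %v: %w\n%s", name, args, err, out)
        }
        return nil
    }

    func main() {
        img := "gcr.io/google-containers/addon-resizer:functional-451422"
        _ = run("docker", "rmi", img) // ignore error if already absent
        if err := run("out/minikube-linux-arm64", "-p", "functional-451422",
            "image", "save", "--daemon", img); err != nil {
            panic(err)
        }
        if err := run("docker", "image", "inspect", img); err != nil {
            panic(err)
        }
        fmt.Println("image present in local Docker daemon")
    }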

TestFunctional/delete_addon-resizer_images (0.08s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-451422
--- PASS: TestFunctional/delete_addon-resizer_images (0.08s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-451422
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-451422
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestIngressAddonLegacy/StartLegacyK8sCluster (97.06s)

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-arm64 start -p ingress-addon-legacy-037418 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E0109 00:14:20.951243 1683967 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/addons-983119/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-arm64 start -p ingress-addon-legacy-037418 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (1m37.056931453s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (97.06s)

TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (11.43s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-037418 addons enable ingress --alsologtostderr -v=5
E0109 00:14:48.637215 1683967 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/addons-983119/client.crt: no such file or directory
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-037418 addons enable ingress --alsologtostderr -v=5: (11.432983148s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (11.43s)

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.76s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-037418 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.76s)

TestJSONOutput/start/Command (90.72s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-736569 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio
E0109 00:18:17.178606 1683967 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/functional-451422/client.crt: no such file or directory
E0109 00:19:20.951231 1683967 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/addons-983119/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-736569 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio: (1m30.721602537s)
--- PASS: TestJSONOutput/start/Command (90.72s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.8s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-736569 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.80s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.74s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-736569 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.74s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.87s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-736569 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-736569 --output=json --user=testUser: (5.870638229s)
--- PASS: TestJSONOutput/stop/Command (5.87s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.25s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-959282 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-959282 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (93.240725ms)

-- stdout --
	{"specversion":"1.0","id":"c796be3d-c5b5-4a3a-8598-ef75aea1d9a5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-959282] minikube v1.32.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"ff958681-6743-4156-ab1e-6bebcfd8797c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17830"}}
	{"specversion":"1.0","id":"1d6818d0-b99b-4ecf-a3b6-f077bee238dc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"6ed283df-2fbb-4fc5-9823-fa92be859e4b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17830-1678586/kubeconfig"}}
	{"specversion":"1.0","id":"1a4446df-e193-40c3-8bc1-b58a76ba03a2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17830-1678586/.minikube"}}
	{"specversion":"1.0","id":"2ffe53fd-2b3f-4cab-a6ef-55050ec5a78e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"624f4521-b329-4dd0-bb09-3c28c6a8a053","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"9fefa435-506f-4097-aadc-8fc21fb9d401","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-959282" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-959282
--- PASS: TestErrorJSONOutput (0.25s)
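A failed JSON-mode start surfaces as a single io.k8s.sigs.minikube.error event, as the stdout above shows. A sketch of pulling out the structured error (the jq filter is illustrative, not part of the test):

    minikube start -p json-output-error-959282 --memory=2200 --output=json --wait=true --driver=fail \
      | jq 'select(.type == "io.k8s.sigs.minikube.error") | {name: .data.name, exitcode: .data.exitcode}'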

                                                
                                    
TestKicCustomNetwork/create_custom_network (48.6s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-106272 --network=
E0109 00:19:39.099085 1683967 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/functional-451422/client.crt: no such file or directory
E0109 00:19:50.738658 1683967 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/ingress-addon-legacy-037418/client.crt: no such file or directory
E0109 00:19:50.744782 1683967 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/ingress-addon-legacy-037418/client.crt: no such file or directory
E0109 00:19:50.755354 1683967 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/ingress-addon-legacy-037418/client.crt: no such file or directory
E0109 00:19:50.776264 1683967 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/ingress-addon-legacy-037418/client.crt: no such file or directory
E0109 00:19:50.816696 1683967 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/ingress-addon-legacy-037418/client.crt: no such file or directory
E0109 00:19:50.897076 1683967 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/ingress-addon-legacy-037418/client.crt: no such file or directory
E0109 00:19:51.057518 1683967 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/ingress-addon-legacy-037418/client.crt: no such file or directory
E0109 00:19:51.378134 1683967 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/ingress-addon-legacy-037418/client.crt: no such file or directory
E0109 00:19:52.019104 1683967 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/ingress-addon-legacy-037418/client.crt: no such file or directory
E0109 00:19:53.299643 1683967 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/ingress-addon-legacy-037418/client.crt: no such file or directory
E0109 00:19:55.859894 1683967 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/ingress-addon-legacy-037418/client.crt: no such file or directory
E0109 00:20:00.980972 1683967 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/ingress-addon-legacy-037418/client.crt: no such file or directory
E0109 00:20:11.221515 1683967 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/ingress-addon-legacy-037418/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-106272 --network=: (46.486550972s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-106272" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-106272
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-106272: (2.085688688s)
--- PASS: TestKicCustomNetwork/create_custom_network (48.60s)
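The run above passes an empty --network= value and lets minikube pick the network itself; supplying a name creates or reuses a Docker network of that name. A sketch (my-net is an illustrative name, not from the log):

    minikube start -p docker-network-106272 --network=my-net
    docker network ls --format '{{.Name}}'   # my-net should appear in the list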

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (35.34s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-463454 --network=bridge
E0109 00:20:31.701720 1683967 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/ingress-addon-legacy-037418/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-463454 --network=bridge: (33.330831336s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-463454" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-463454
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-463454: (1.980477232s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (35.34s)

TestKicExistingNetwork (38.42s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-867632 --network=existing-network
E0109 00:21:12.661964 1683967 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/ingress-addon-legacy-037418/client.crt: no such file or directory
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-867632 --network=existing-network: (36.289763863s)
helpers_test.go:175: Cleaning up "existing-network-867632" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-867632
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-867632: (1.96906562s)
--- PASS: TestKicExistingNetwork (38.42s)

TestKicCustomSubnet (33.92s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-695365 --subnet=192.168.60.0/24
E0109 00:21:55.257332 1683967 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/functional-451422/client.crt: no such file or directory
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-695365 --subnet=192.168.60.0/24: (31.823644224s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-695365 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-695365" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-695365
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-695365: (2.073023642s)
--- PASS: TestKicCustomSubnet (33.92s)
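The subnet check reduces to the two commands logged above (verbatim apart from layout):

    minikube start -p custom-subnet-695365 --subnet=192.168.60.0/24
    docker network inspect custom-subnet-695365 --format "{{(index .IPAM.Config 0).Subnet}}"   # expect 192.168.60.0/24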

                                                
                                    
TestKicStaticIP (33.25s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-530690 --static-ip=192.168.200.200
E0109 00:22:22.939282 1683967 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/functional-451422/client.crt: no such file or directory
E0109 00:22:34.582493 1683967 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/ingress-addon-legacy-037418/client.crt: no such file or directory
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-530690 --static-ip=192.168.200.200: (30.899392425s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-530690 ip
helpers_test.go:175: Cleaning up "static-ip-530690" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-530690
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-530690: (2.147138569s)
--- PASS: TestKicStaticIP (33.25s)
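Likewise, the static-IP check is a start followed by an IP query (mirroring the logged commands; the expected output is implied by the flag):

    minikube start -p static-ip-530690 --static-ip=192.168.200.200
    minikube -p static-ip-530690 ip   # should print 192.168.200.200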

                                                
                                    
TestMainNoArgs (0.07s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.07s)

TestMinikubeProfile (71.3s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-871995 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-871995 --driver=docker  --container-runtime=crio: (33.287657914s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-874678 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-874678 --driver=docker  --container-runtime=crio: (32.735583257s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-871995
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-874678
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-874678" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-874678
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-874678: (2.002980175s)
helpers_test.go:175: Cleaning up "first-871995" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-871995
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-871995: (1.966407684s)
--- PASS: TestMinikubeProfile (71.30s)
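Profile selection and listing as exercised above (a sketch of the same calls):

    minikube profile first-871995    # make this the active profile
    minikube profile list -ojson     # machine-readable listing of all profiles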

                                                
                                    
TestMountStart/serial/StartWithMountFirst (9.74s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-325094 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-325094 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (8.733819866s)
--- PASS: TestMountStart/serial/StartWithMountFirst (9.74s)

TestMountStart/serial/VerifyMountFirst (0.3s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-325094 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.30s)
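The two tests above are the whole host-folder mount smoke test: start with mount flags, then list the mount point over ssh. A sketch mirroring the logged invocation:

    minikube start -p mount-start-1-325094 --memory=2048 --mount \
        --mount-gid 0 --mount-uid 0 --mount-msize 6543 --mount-port 46464 \
        --no-kubernetes --driver=docker --container-runtime=crio
    minikube -p mount-start-1-325094 ssh -- ls /minikube-host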

                                                
                                    
TestMountStart/serial/StartWithMountSecond (7.18s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-327096 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-327096 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (6.181415586s)
--- PASS: TestMountStart/serial/StartWithMountSecond (7.18s)

TestMountStart/serial/VerifyMountSecond (0.31s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-327096 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.31s)

TestMountStart/serial/DeleteFirst (1.68s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-325094 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-325094 --alsologtostderr -v=5: (1.6783361s)
--- PASS: TestMountStart/serial/DeleteFirst (1.68s)

TestMountStart/serial/VerifyMountPostDelete (0.3s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-327096 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.30s)

TestMountStart/serial/Stop (1.23s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-327096
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-327096: (1.232867556s)
--- PASS: TestMountStart/serial/Stop (1.23s)

TestMountStart/serial/RestartStopped (7.8s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-327096
E0109 00:24:20.950977 1683967 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/addons-983119/client.crt: no such file or directory
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-327096: (6.800991421s)
--- PASS: TestMountStart/serial/RestartStopped (7.80s)

TestMountStart/serial/VerifyMountPostStop (0.3s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-327096 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.30s)

TestMultiNode/serial/FreshStart2Nodes (99.65s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:86: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-979047 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E0109 00:24:50.738704 1683967 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/ingress-addon-legacy-037418/client.crt: no such file or directory
E0109 00:25:18.423712 1683967 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/ingress-addon-legacy-037418/client.crt: no such file or directory
E0109 00:25:43.998159 1683967 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/addons-983119/client.crt: no such file or directory
multinode_test.go:86: (dbg) Done: out/minikube-linux-arm64 start -p multinode-979047 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m39.052413282s)
multinode_test.go:92: (dbg) Run:  out/minikube-linux-arm64 -p multinode-979047 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (99.65s)

TestMultiNode/serial/DeployApp2Nodes (5.35s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:509: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-979047 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:514: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-979047 -- rollout status deployment/busybox
multinode_test.go:514: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-979047 -- rollout status deployment/busybox: (3.215717253s)
multinode_test.go:521: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-979047 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:544: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-979047 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:552: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-979047 -- exec busybox-5bc68d56bd-4v5vc -- nslookup kubernetes.io
multinode_test.go:552: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-979047 -- exec busybox-5bc68d56bd-bxf99 -- nslookup kubernetes.io
multinode_test.go:562: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-979047 -- exec busybox-5bc68d56bd-4v5vc -- nslookup kubernetes.default
multinode_test.go:562: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-979047 -- exec busybox-5bc68d56bd-bxf99 -- nslookup kubernetes.default
multinode_test.go:570: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-979047 -- exec busybox-5bc68d56bd-4v5vc -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:570: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-979047 -- exec busybox-5bc68d56bd-bxf99 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.35s)
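The DNS validation loops over every busybox pod and three lookup targets; one representative iteration looks like this (sketch; <pod-name> stands for a name returned by the jsonpath query):

    minikube kubectl -p multinode-979047 -- rollout status deployment/busybox
    minikube kubectl -p multinode-979047 -- get pods -o jsonpath='{.items[*].metadata.name}'
    minikube kubectl -p multinode-979047 -- exec <pod-name> -- nslookup kubernetes.default.svc.cluster.local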

                                                
                                    
TestMultiNode/serial/AddNode (49.56s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:111: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-979047 -v 3 --alsologtostderr
E0109 00:26:55.257403 1683967 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/functional-451422/client.crt: no such file or directory
multinode_test.go:111: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-979047 -v 3 --alsologtostderr: (48.806721929s)
multinode_test.go:117: (dbg) Run:  out/minikube-linux-arm64 -p multinode-979047 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (49.56s)

TestMultiNode/serial/MultiNodeLabels (0.09s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:211: (dbg) Run:  kubectl --context multinode-979047 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.09s)

TestMultiNode/serial/ProfileList (0.36s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:133: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.36s)

TestMultiNode/serial/CopyFile (11.42s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:174: (dbg) Run:  out/minikube-linux-arm64 -p multinode-979047 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-979047 cp testdata/cp-test.txt multinode-979047:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-979047 ssh -n multinode-979047 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-979047 cp multinode-979047:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3920498493/001/cp-test_multinode-979047.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-979047 ssh -n multinode-979047 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-979047 cp multinode-979047:/home/docker/cp-test.txt multinode-979047-m02:/home/docker/cp-test_multinode-979047_multinode-979047-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-979047 ssh -n multinode-979047 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-979047 ssh -n multinode-979047-m02 "sudo cat /home/docker/cp-test_multinode-979047_multinode-979047-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-979047 cp multinode-979047:/home/docker/cp-test.txt multinode-979047-m03:/home/docker/cp-test_multinode-979047_multinode-979047-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-979047 ssh -n multinode-979047 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-979047 ssh -n multinode-979047-m03 "sudo cat /home/docker/cp-test_multinode-979047_multinode-979047-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-979047 cp testdata/cp-test.txt multinode-979047-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-979047 ssh -n multinode-979047-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-979047 cp multinode-979047-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3920498493/001/cp-test_multinode-979047-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-979047 ssh -n multinode-979047-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-979047 cp multinode-979047-m02:/home/docker/cp-test.txt multinode-979047:/home/docker/cp-test_multinode-979047-m02_multinode-979047.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-979047 ssh -n multinode-979047-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-979047 ssh -n multinode-979047 "sudo cat /home/docker/cp-test_multinode-979047-m02_multinode-979047.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-979047 cp multinode-979047-m02:/home/docker/cp-test.txt multinode-979047-m03:/home/docker/cp-test_multinode-979047-m02_multinode-979047-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-979047 ssh -n multinode-979047-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-979047 ssh -n multinode-979047-m03 "sudo cat /home/docker/cp-test_multinode-979047-m02_multinode-979047-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-979047 cp testdata/cp-test.txt multinode-979047-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-979047 ssh -n multinode-979047-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-979047 cp multinode-979047-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3920498493/001/cp-test_multinode-979047-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-979047 ssh -n multinode-979047-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-979047 cp multinode-979047-m03:/home/docker/cp-test.txt multinode-979047:/home/docker/cp-test_multinode-979047-m03_multinode-979047.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-979047 ssh -n multinode-979047-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-979047 ssh -n multinode-979047 "sudo cat /home/docker/cp-test_multinode-979047-m03_multinode-979047.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-979047 cp multinode-979047-m03:/home/docker/cp-test.txt multinode-979047-m02:/home/docker/cp-test_multinode-979047-m03_multinode-979047-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-979047 ssh -n multinode-979047-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-979047 ssh -n multinode-979047-m02 "sudo cat /home/docker/cp-test_multinode-979047-m03_multinode-979047-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (11.42s)
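Every hop above follows the same copy-then-verify pattern; one representative pair (sketch, same profile and node names as the log):

    minikube -p multinode-979047 cp testdata/cp-test.txt multinode-979047-m02:/home/docker/cp-test.txt
    minikube -p multinode-979047 ssh -n multinode-979047-m02 "sudo cat /home/docker/cp-test.txt"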

                                                
                                    
TestMultiNode/serial/StopNode (2.42s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:238: (dbg) Run:  out/minikube-linux-arm64 -p multinode-979047 node stop m03
multinode_test.go:238: (dbg) Done: out/minikube-linux-arm64 -p multinode-979047 node stop m03: (1.226287832s)
multinode_test.go:244: (dbg) Run:  out/minikube-linux-arm64 -p multinode-979047 status
multinode_test.go:244: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-979047 status: exit status 7 (615.028314ms)

-- stdout --
	multinode-979047
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-979047-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-979047-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:251: (dbg) Run:  out/minikube-linux-arm64 -p multinode-979047 status --alsologtostderr
multinode_test.go:251: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-979047 status --alsologtostderr: exit status 7 (573.223376ms)

-- stdout --
	multinode-979047
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-979047-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-979047-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0109 00:27:18.967914 1757200 out.go:296] Setting OutFile to fd 1 ...
	I0109 00:27:18.968222 1757200 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0109 00:27:18.968251 1757200 out.go:309] Setting ErrFile to fd 2...
	I0109 00:27:18.968271 1757200 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0109 00:27:18.968564 1757200 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17830-1678586/.minikube/bin
	I0109 00:27:18.968786 1757200 out.go:303] Setting JSON to false
	I0109 00:27:18.968873 1757200 mustload.go:65] Loading cluster: multinode-979047
	I0109 00:27:18.968978 1757200 notify.go:220] Checking for updates...
	I0109 00:27:18.969389 1757200 config.go:182] Loaded profile config "multinode-979047": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0109 00:27:18.969423 1757200 status.go:255] checking status of multinode-979047 ...
	I0109 00:27:18.970284 1757200 cli_runner.go:164] Run: docker container inspect multinode-979047 --format={{.State.Status}}
	I0109 00:27:18.989780 1757200 status.go:330] multinode-979047 host status = "Running" (err=<nil>)
	I0109 00:27:18.989801 1757200 host.go:66] Checking if "multinode-979047" exists ...
	I0109 00:27:18.990092 1757200 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-979047
	I0109 00:27:19.007938 1757200 host.go:66] Checking if "multinode-979047" exists ...
	I0109 00:27:19.008288 1757200 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0109 00:27:19.008340 1757200 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-979047
	I0109 00:27:19.034650 1757200 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34444 SSHKeyPath:/home/jenkins/minikube-integration/17830-1678586/.minikube/machines/multinode-979047/id_rsa Username:docker}
	I0109 00:27:19.136966 1757200 ssh_runner.go:195] Run: systemctl --version
	I0109 00:27:19.142315 1757200 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0109 00:27:19.156076 1757200 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0109 00:27:19.226201 1757200 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:40 OomKillDisable:true NGoroutines:55 SystemTime:2024-01-09 00:27:19.216607706 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0109 00:27:19.226819 1757200 kubeconfig.go:92] found "multinode-979047" server: "https://192.168.58.2:8443"
	I0109 00:27:19.226850 1757200 api_server.go:166] Checking apiserver status ...
	I0109 00:27:19.226896 1757200 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0109 00:27:19.239794 1757200 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1245/cgroup
	I0109 00:27:19.250987 1757200 api_server.go:182] apiserver freezer: "7:freezer:/docker/4ab6ef7ad13d9d90167e1fb36a66ae45b1b7b7b23777e167f992d915692cf603/crio/crio-aba17a1c7ee3696cb52e53b0da3af52e340bd803c5d311d87bbfc1c884794fbf"
	I0109 00:27:19.251066 1757200 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/4ab6ef7ad13d9d90167e1fb36a66ae45b1b7b7b23777e167f992d915692cf603/crio/crio-aba17a1c7ee3696cb52e53b0da3af52e340bd803c5d311d87bbfc1c884794fbf/freezer.state
	I0109 00:27:19.261272 1757200 api_server.go:204] freezer state: "THAWED"
	I0109 00:27:19.261305 1757200 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0109 00:27:19.270015 1757200 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I0109 00:27:19.270042 1757200 status.go:421] multinode-979047 apiserver status = Running (err=<nil>)
	I0109 00:27:19.270056 1757200 status.go:257] multinode-979047 status: &{Name:multinode-979047 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0109 00:27:19.270072 1757200 status.go:255] checking status of multinode-979047-m02 ...
	I0109 00:27:19.270387 1757200 cli_runner.go:164] Run: docker container inspect multinode-979047-m02 --format={{.State.Status}}
	I0109 00:27:19.288104 1757200 status.go:330] multinode-979047-m02 host status = "Running" (err=<nil>)
	I0109 00:27:19.288129 1757200 host.go:66] Checking if "multinode-979047-m02" exists ...
	I0109 00:27:19.288430 1757200 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-979047-m02
	I0109 00:27:19.305879 1757200 host.go:66] Checking if "multinode-979047-m02" exists ...
	I0109 00:27:19.306249 1757200 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0109 00:27:19.306308 1757200 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-979047-m02
	I0109 00:27:19.324770 1757200 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34449 SSHKeyPath:/home/jenkins/minikube-integration/17830-1678586/.minikube/machines/multinode-979047-m02/id_rsa Username:docker}
	I0109 00:27:19.424600 1757200 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0109 00:27:19.440680 1757200 status.go:257] multinode-979047-m02 status: &{Name:multinode-979047-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0109 00:27:19.440714 1757200 status.go:255] checking status of multinode-979047-m03 ...
	I0109 00:27:19.441070 1757200 cli_runner.go:164] Run: docker container inspect multinode-979047-m03 --format={{.State.Status}}
	I0109 00:27:19.464896 1757200 status.go:330] multinode-979047-m03 host status = "Stopped" (err=<nil>)
	I0109 00:27:19.464922 1757200 status.go:343] host is not running, skipping remaining checks
	I0109 00:27:19.464930 1757200 status.go:257] multinode-979047-m03 status: &{Name:multinode-979047-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.42s)
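Note the exit-code contract verified here: status exits 7 rather than 0 while any node is stopped. Sketch:

    minikube -p multinode-979047 node stop m03
    minikube -p multinode-979047 status; echo $?   # prints the table above, then 7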

                                                
                                    
TestMultiNode/serial/StartAfterStop (13.58s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:272: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-979047 node start m03 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-979047 node start m03 --alsologtostderr: (12.722347152s)
multinode_test.go:289: (dbg) Run:  out/minikube-linux-arm64 -p multinode-979047 status
multinode_test.go:303: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (13.58s)

TestMultiNode/serial/RestartKeepsNodes (122.77s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:311: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-979047
multinode_test.go:318: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-979047
multinode_test.go:318: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-979047: (24.940600306s)
multinode_test.go:323: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-979047 --wait=true -v=8 --alsologtostderr
E0109 00:29:20.951104 1683967 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/addons-983119/client.crt: no such file or directory
multinode_test.go:323: (dbg) Done: out/minikube-linux-arm64 start -p multinode-979047 --wait=true -v=8 --alsologtostderr: (1m37.662915299s)
multinode_test.go:328: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-979047
--- PASS: TestMultiNode/serial/RestartKeepsNodes (122.77s)
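The invariant checked here is that the node list survives a full stop/start cycle; the same sequence by hand (sketch):

    minikube node list -p multinode-979047
    minikube stop -p multinode-979047
    minikube start -p multinode-979047 --wait=true
    minikube node list -p multinode-979047   # expected to match the first listing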

                                                
                                    
TestMultiNode/serial/DeleteNode (5.19s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-979047 node delete m03
multinode_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p multinode-979047 node delete m03: (4.404658836s)
multinode_test.go:428: (dbg) Run:  out/minikube-linux-arm64 -p multinode-979047 status --alsologtostderr
multinode_test.go:442: (dbg) Run:  docker volume ls
multinode_test.go:452: (dbg) Run:  kubectl get nodes
multinode_test.go:460: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.19s)

TestMultiNode/serial/StopMultiNode (23.96s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:342: (dbg) Run:  out/minikube-linux-arm64 -p multinode-979047 stop
E0109 00:29:50.739048 1683967 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/ingress-addon-legacy-037418/client.crt: no such file or directory
multinode_test.go:342: (dbg) Done: out/minikube-linux-arm64 -p multinode-979047 stop: (23.75484392s)
multinode_test.go:348: (dbg) Run:  out/minikube-linux-arm64 -p multinode-979047 status
multinode_test.go:348: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-979047 status: exit status 7 (102.71451ms)

-- stdout --
	multinode-979047
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-979047-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:355: (dbg) Run:  out/minikube-linux-arm64 -p multinode-979047 status --alsologtostderr
multinode_test.go:355: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-979047 status --alsologtostderr: exit status 7 (104.304152ms)

-- stdout --
	multinode-979047
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-979047-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0109 00:30:04.923377 1765364 out.go:296] Setting OutFile to fd 1 ...
	I0109 00:30:04.923513 1765364 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0109 00:30:04.923523 1765364 out.go:309] Setting ErrFile to fd 2...
	I0109 00:30:04.923529 1765364 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0109 00:30:04.923792 1765364 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17830-1678586/.minikube/bin
	I0109 00:30:04.923972 1765364 out.go:303] Setting JSON to false
	I0109 00:30:04.924054 1765364 mustload.go:65] Loading cluster: multinode-979047
	I0109 00:30:04.924140 1765364 notify.go:220] Checking for updates...
	I0109 00:30:04.924471 1765364 config.go:182] Loaded profile config "multinode-979047": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0109 00:30:04.924489 1765364 status.go:255] checking status of multinode-979047 ...
	I0109 00:30:04.925071 1765364 cli_runner.go:164] Run: docker container inspect multinode-979047 --format={{.State.Status}}
	I0109 00:30:04.943226 1765364 status.go:330] multinode-979047 host status = "Stopped" (err=<nil>)
	I0109 00:30:04.943250 1765364 status.go:343] host is not running, skipping remaining checks
	I0109 00:30:04.943257 1765364 status.go:257] multinode-979047 status: &{Name:multinode-979047 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0109 00:30:04.943296 1765364 status.go:255] checking status of multinode-979047-m02 ...
	I0109 00:30:04.943688 1765364 cli_runner.go:164] Run: docker container inspect multinode-979047-m02 --format={{.State.Status}}
	I0109 00:30:04.961208 1765364 status.go:330] multinode-979047-m02 host status = "Stopped" (err=<nil>)
	I0109 00:30:04.961230 1765364 status.go:343] host is not running, skipping remaining checks
	I0109 00:30:04.961237 1765364 status.go:257] multinode-979047-m02 status: &{Name:multinode-979047-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.96s)

TestMultiNode/serial/RestartMultiNode (86.01s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:372: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-979047 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:382: (dbg) Done: out/minikube-linux-arm64 start -p multinode-979047 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m25.161862512s)
multinode_test.go:388: (dbg) Run:  out/minikube-linux-arm64 -p multinode-979047 status --alsologtostderr
multinode_test.go:402: (dbg) Run:  kubectl get nodes
multinode_test.go:410: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (86.01s)

TestMultiNode/serial/ValidateNameConflict (35.59s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:471: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-979047
multinode_test.go:480: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-979047-m02 --driver=docker  --container-runtime=crio
multinode_test.go:480: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-979047-m02 --driver=docker  --container-runtime=crio: exit status 14 (109.116658ms)

-- stdout --
	* [multinode-979047-m02] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17830
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17830-1678586/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17830-1678586/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-979047-m02' is duplicated with machine name 'multinode-979047-m02' in profile 'multinode-979047'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:488: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-979047-m03 --driver=docker  --container-runtime=crio
E0109 00:31:55.256471 1683967 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/functional-451422/client.crt: no such file or directory
multinode_test.go:488: (dbg) Done: out/minikube-linux-arm64 start -p multinode-979047-m03 --driver=docker  --container-runtime=crio: (33.035763625s)
multinode_test.go:495: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-979047
multinode_test.go:495: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-979047: exit status 80 (366.213782ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-979047
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-979047-m03 already exists in multinode-979047-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:500: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-979047-m03
multinode_test.go:500: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-979047-m03: (2.003165439s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (35.59s)
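
The two non-zero exits above are deliberate guards: exit 14 (MK_USAGE) when a new profile name shadows a machine name inside an existing profile, and exit 80 (GUEST_NODE_ADD) when an added node's generated name collides with a live profile. A bash sketch of the first guard, assuming a multinode profile whose second machine is named multinode-979047-m02 as in the log:

	# "-m02" is already taken by the profile's second machine, so this start is refused
	minikube start -p multinode-979047-m02 --driver=docker --container-runtime=crio
	echo $?   # 14: "Profile name should be unique"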

                                                
                                    
x
+
TestPreload (171.42s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-483637 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4
E0109 00:33:18.300023 1683967 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/functional-451422/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-483637 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4: (1m25.724953066s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-483637 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-483637 image pull gcr.io/k8s-minikube/busybox: (2.843769544s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-483637
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-483637: (5.812173395s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-483637 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
E0109 00:34:20.951233 1683967 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/addons-983119/client.crt: no such file or directory
E0109 00:34:50.738948 1683967 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/ingress-addon-legacy-037418/client.crt: no such file or directory
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-483637 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (1m14.389353116s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-483637 image list
helpers_test.go:175: Cleaning up "test-preload-483637" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-483637
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-483637: (2.380831332s)
--- PASS: TestPreload (171.42s)
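
The flow being exercised here: start without the preload tarball on an older Kubernetes, side-load an extra image, stop, restart with defaults, and verify the image survived. A hedged reproduction sketch with a hypothetical profile name:

	minikube start -p preload-demo --memory=2200 --wait=true --preload=false \
	  --driver=docker --container-runtime=crio --kubernetes-version=v1.24.4
	minikube -p preload-demo image pull gcr.io/k8s-minikube/busybox
	minikube stop -p preload-demo
	minikube start -p preload-demo --memory=2200 --wait=true --driver=docker --container-runtime=crio
	minikube -p preload-demo image list | grep busybox   # the side-loaded image should still be present
	minikube delete -p preload-demo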

                                                
                                    
x
+
TestInsufficientStorage (10.75s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-414877 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-414877 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (8.164219488s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"cdc2f96a-db03-4b05-a109-cf88e5f7a6c0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-414877] minikube v1.32.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"0e1b8430-b3d4-4187-aed1-c40e3f3f563a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17830"}}
	{"specversion":"1.0","id":"0486ccb9-5004-4057-95f2-acc458ce7589","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"e84aa90a-3473-4605-b828-58339fa03f79","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17830-1678586/kubeconfig"}}
	{"specversion":"1.0","id":"0654a694-cce3-4534-8cfd-053c270d5b35","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17830-1678586/.minikube"}}
	{"specversion":"1.0","id":"bcc2ae8a-f4d3-40ce-9703-3ebafe770248","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"c2da0a6b-2c42-40d9-aaae-1bcc7f1f4254","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"f1bcf73f-063f-41bd-ab73-04b55c7e9552","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"ba880a90-053b-4a10-9665-70b58bb84275","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"6553d804-bdfa-4e27-b5bc-d8f4b8a02947","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"b6cbacb0-89df-4989-82b4-14659c558ebc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"e3d9a5ca-9cf9-4352-bfed-9c787127621d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-414877 in cluster insufficient-storage-414877","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"5de82d34-03dd-4838-a1cf-848b396768b1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.42-1704751654-17830 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"c6274ac1-adce-482f-9132-d7121807e8a5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"2704468d-f92d-4290-85b2-19a3697b123f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-414877 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-414877 --output=json --layout=cluster: exit status 7 (331.363727ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-414877","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-414877","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0109 00:35:51.295439 1781635 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-414877" does not appear in /home/jenkins/minikube-integration/17830-1678586/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-414877 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-414877 --output=json --layout=cluster: exit status 7 (321.623596ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-414877","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-414877","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0109 00:35:51.617949 1781690 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-414877" does not appear in /home/jenkins/minikube-integration/17830-1678586/kubeconfig
	E0109 00:35:51.629815 1781690 status.go:559] unable to read event log: stat: stat /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/insufficient-storage-414877/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-414877" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-414877
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-414877: (1.934030207s)
--- PASS: TestInsufficientStorage (10.75s)
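
The MINIKUBE_TEST_STORAGE_CAPACITY / MINIKUBE_TEST_AVAILABLE_STORAGE values in the CloudEvents stream are the hooks this test uses to fake a full disk: start aborts with exit 26 (RSRC_DOCKER_STORAGE) and status then reports code 507. A sketch, assuming jq is installed; the profile name is hypothetical:

	export MINIKUBE_TEST_STORAGE_CAPACITY=100   # pretend /var holds 100MB in total
	export MINIKUBE_TEST_AVAILABLE_STORAGE=19   # ...with only 19MB free
	minikube start -p low-disk --output=json --driver=docker --container-runtime=crio
	echo $?   # 26 (RSRC_DOCKER_STORAGE)
	minikube status -p low-disk --output=json --layout=cluster | jq .StatusName
	# "InsufficientStorage" (StatusCode 507); the status command itself exits 7 here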

                                                
                                    
x
+
TestKubernetesUpgrade (128.66s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:235: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-899424 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:235: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-899424 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m3.093624162s)
version_upgrade_test.go:240: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-899424
version_upgrade_test.go:240: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-899424: (1.552446655s)
version_upgrade_test.go:245: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-899424 status --format={{.Host}}
version_upgrade_test.go:245: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-899424 status --format={{.Host}}: exit status 7 (143.105542ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:247: status error: exit status 7 (may be ok)
version_upgrade_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-899424 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E0109 00:39:20.950485 1683967 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/addons-983119/client.crt: no such file or directory
version_upgrade_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-899424 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (31.443329811s)
version_upgrade_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-899424 version --output=json
version_upgrade_test.go:280: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:282: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-899424 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:282: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-899424 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=crio: exit status 106 (172.90088ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-899424] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17830
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17830-1678586/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17830-1678586/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.29.0-rc.2 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-899424
	    minikube start -p kubernetes-upgrade-899424 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-8994242 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.29.0-rc.2, by running:
	    
	    minikube start -p kubernetes-upgrade-899424 --kubernetes-version=v1.29.0-rc.2
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:286: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:288: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-899424 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:288: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-899424 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (29.24787873s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-899424" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-899424
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-899424: (2.814622446s)
--- PASS: TestKubernetesUpgrade (128.66s)
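
The sequence encodes minikube's version-change contract: in-place upgrades are allowed, in-place downgrades are refused with exit 106 (K8S_DOWNGRADE_UNSUPPORTED) plus a delete-and-recreate suggestion. A condensed bash sketch with a hypothetical profile name:

	minikube start -p upgrade-demo --memory=2200 --kubernetes-version=v1.16.0 --driver=docker --container-runtime=crio
	minikube stop -p upgrade-demo
	minikube start -p upgrade-demo --memory=2200 --kubernetes-version=v1.29.0-rc.2 --driver=docker --container-runtime=crio
	# downgrading the same profile is rejected outright:
	minikube start -p upgrade-demo --memory=2200 --kubernetes-version=v1.16.0 --driver=docker --container-runtime=crio
	echo $?   # 106; recover by deleting the profile and starting fresh at v1.16.0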

                                                
                                    
x
+
TestPause/serial/Start (83.67s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-938398 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-938398 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (1m23.66683584s)
--- PASS: TestPause/serial/Start (83.67s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (34.57s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-938398 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-938398 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (34.552416294s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (34.57s)

                                                
                                    
x
+
TestPause/serial/Pause (0.8s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-938398 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.80s)

                                                
                                    
x
+
TestPause/serial/VerifyStatus (0.35s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-938398 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-938398 --output=json --layout=cluster: exit status 2 (353.527301ms)

                                                
                                                
-- stdout --
	{"Name":"pause-938398","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-938398","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.35s)
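
A paused cluster is reported with code 418 ("Paused") and the status command exits 2, which is why the non-zero exit above still counts as a pass. A jq sketch over the same layout; the profile name is hypothetical:

	minikube pause -p pause-demo
	minikube status -p pause-demo --output=json --layout=cluster \
	  | jq '{cluster: .StatusName, apiserver: .Nodes[0].Components.apiserver.StatusName}'
	# {"cluster":"Paused","apiserver":"Paused"}; the kubelet shows "Stopped" while paused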

                                                
                                    
x
+
TestPause/serial/Unpause (0.79s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-938398 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.79s)

                                                
                                    
x
+
TestPause/serial/PauseAgain (1.02s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-938398 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-938398 --alsologtostderr -v=5: (1.016576693s)
--- PASS: TestPause/serial/PauseAgain (1.02s)

                                                
                                    
x
+
TestPause/serial/DeletePaused (2.79s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-938398 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-938398 --alsologtostderr -v=5: (2.790173203s)
--- PASS: TestPause/serial/DeletePaused (2.79s)

                                                
                                    
x
+
TestPause/serial/VerifyDeletedResources (0.18s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-938398
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-938398: exit status 1 (23.416853ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-938398: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.18s)
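
This check passes precisely because "docker volume inspect" fails: after deletion, no container, volume, or network should carry the profile name. A hedged cleanup audit using the profile name from the log:

	docker ps -a --filter name=pause-938398 --format '{{.Names}}'   # expect no output
	docker volume inspect pause-938398 || echo "volume gone"        # expect exit 1: no such volume
	docker network ls --filter name=pause-938398 -q                 # expect no output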

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (1.07s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.07s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (0.69s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:219: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-389816
E0109 00:39:50.739264 1683967 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/ingress-addon-legacy-037418/client.crt: no such file or directory
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.69s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-791575 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-791575 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio: exit status 14 (110.731408ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-791575] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17830
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17830-1678586/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17830-1678586/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)
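
Exit 14 here is the usage guard for mutually exclusive flags, and the error text hints at a subtler trap: a kubernetes-version pinned in the global config trips the same check. A sketch with a hypothetical profile name:

	# rejected: an explicit version contradicts --no-kubernetes (exit 14, MK_USAGE)
	minikube start -p nok8s-demo --no-kubernetes --kubernetes-version=1.20 --driver=docker --container-runtime=crio
	# clear any globally pinned version, then start without Kubernetes at all
	minikube config unset kubernetes-version
	minikube start -p nok8s-demo --no-kubernetes --driver=docker --container-runtime=crio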

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (40.86s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-791575 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-791575 --driver=docker  --container-runtime=crio: (40.446922867s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-791575 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (40.86s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (20.17s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-791575 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-791575 --no-kubernetes --driver=docker  --container-runtime=crio: (17.498790003s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-791575 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-791575 status -o json: exit status 2 (454.226286ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-791575","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-791575
E0109 00:41:55.258687 1683967 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/functional-451422/client.crt: no such file or directory
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-791575: (2.215938697s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (20.17s)
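
With Kubernetes dropped, status reports a running host whose control-plane components are stopped, and the command exits 2; the test treats that combination as success. A jq sketch over the same flat JSON shape, profile name hypothetical:

	minikube -p nok8s-demo status -o json | jq '{Host, Kubelet, APIServer}'
	# {"Host":"Running","Kubelet":"Stopped","APIServer":"Stopped"}; exit status is 2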

                                                
                                    
x
+
TestNetworkPlugins/group/false (5.49s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-394167 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-394167 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (333.350927ms)

                                                
                                                
-- stdout --
	* [false-394167] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17830
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17830-1678586/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17830-1678586/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0109 00:41:52.353830 1812475 out.go:296] Setting OutFile to fd 1 ...
	I0109 00:41:52.354197 1812475 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0109 00:41:52.354204 1812475 out.go:309] Setting ErrFile to fd 2...
	I0109 00:41:52.354210 1812475 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0109 00:41:52.354513 1812475 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17830-1678586/.minikube/bin
	I0109 00:41:52.354965 1812475 out.go:303] Setting JSON to false
	I0109 00:41:52.355920 1812475 start.go:128] hostinfo: {"hostname":"ip-172-31-30-239","uptime":26655,"bootTime":1704734258,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0109 00:41:52.356005 1812475 start.go:138] virtualization:  
	I0109 00:41:52.359101 1812475 out.go:177] * [false-394167] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0109 00:41:52.362296 1812475 out.go:177]   - MINIKUBE_LOCATION=17830
	I0109 00:41:52.365641 1812475 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0109 00:41:52.362475 1812475 notify.go:220] Checking for updates...
	I0109 00:41:52.371256 1812475 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17830-1678586/kubeconfig
	I0109 00:41:52.373901 1812475 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17830-1678586/.minikube
	I0109 00:41:52.376283 1812475 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0109 00:41:52.378796 1812475 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0109 00:41:52.381728 1812475 config.go:182] Loaded profile config "NoKubernetes-791575": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I0109 00:41:52.381893 1812475 driver.go:392] Setting default libvirt URI to qemu:///system
	I0109 00:41:52.417847 1812475 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I0109 00:41:52.417966 1812475 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0109 00:41:52.562465 1812475 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:34 OomKillDisable:true NGoroutines:45 SystemTime:2024-01-09 00:41:52.552763497 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0109 00:41:52.562573 1812475 docker.go:295] overlay module found
	I0109 00:41:52.566305 1812475 out.go:177] * Using the docker driver based on user configuration
	I0109 00:41:52.568553 1812475 start.go:298] selected driver: docker
	I0109 00:41:52.568571 1812475 start.go:902] validating driver "docker" against <nil>
	I0109 00:41:52.568585 1812475 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0109 00:41:52.572102 1812475 out.go:177] 
	W0109 00:41:52.574625 1812475 out.go:239] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0109 00:41:52.576618 1812475 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-394167 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-394167

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-394167

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-394167

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-394167

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-394167

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-394167

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-394167

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-394167

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-394167

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-394167

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-394167" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-394167"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-394167" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-394167"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-394167" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-394167"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-394167

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-394167" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-394167"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-394167" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-394167"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-394167" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-394167" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-394167" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-394167" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-394167" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-394167" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-394167" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-394167" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-394167" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-394167"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-394167" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-394167"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-394167" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-394167"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-394167" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-394167"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-394167" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-394167"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-394167" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-394167" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-394167" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-394167" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-394167"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-394167" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-394167"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-394167" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-394167"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-394167" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-394167"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-394167" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-394167"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17830-1678586/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 09 Jan 2024 00:41:34 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.67.2:8443
  name: NoKubernetes-791575
contexts:
- context:
    cluster: NoKubernetes-791575
    extensions:
    - extension:
        last-update: Tue, 09 Jan 2024 00:41:34 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: context_info
    namespace: default
    user: NoKubernetes-791575
  name: NoKubernetes-791575
current-context: NoKubernetes-791575
kind: Config
preferences: {}
users:
- name: NoKubernetes-791575
  user:
    client-certificate: /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/NoKubernetes-791575/client.crt
    client-key: /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/NoKubernetes-791575/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-394167

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-394167" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-394167"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-394167" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-394167"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-394167" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-394167"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-394167" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-394167"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-394167" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-394167"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-394167" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-394167"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-394167" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-394167"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-394167" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-394167"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-394167" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-394167"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-394167" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-394167"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-394167" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-394167"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-394167" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-394167"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-394167" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-394167"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-394167" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-394167"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-394167" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-394167"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-394167" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-394167"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-394167" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-394167"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-394167" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-394167"

                                                
                                                
----------------------- debugLogs end: false-394167 [took: 4.917726578s] --------------------------------
helpers_test.go:175: Cleaning up "false-394167" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-394167
--- PASS: TestNetworkPlugins/group/false (5.49s)

TestNoKubernetes/serial/Start (7.12s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-791575 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-791575 --no-kubernetes --driver=docker  --container-runtime=crio: (7.118453162s)
--- PASS: TestNoKubernetes/serial/Start (7.12s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.41s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-791575 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-791575 "sudo systemctl is-active --quiet service kubelet": exit status 1 (410.829663ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.41s)
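The two tests above reduce to a short shell sequence. A minimal sketch for reproducing them by hand, assuming a release `minikube` binary on PATH stands in for `out/minikube-linux-arm64` and the profile name `nokube-demo` is a placeholder:

# Start a runtime-only profile: CRI-O comes up, but no kubeadm/kubelet.
minikube start -p nokube-demo --no-kubernetes --driver=docker --container-runtime=crio

# `systemctl is-active --quiet` exits non-zero when the unit is inactive,
# so a failing exit status here is the expected (passing) outcome.
if minikube ssh -p nokube-demo "sudo systemctl is-active --quiet service kubelet"; then
  echo "unexpected: kubelet is active"
else
  echo "ok: kubelet is not running"
fi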
TestNoKubernetes/serial/ProfileList (0.87s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.87s)

TestNoKubernetes/serial/Stop (1.32s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-791575
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-791575: (1.318669003s)
--- PASS: TestNoKubernetes/serial/Stop (1.32s)

TestNoKubernetes/serial/StartNoArgs (8.19s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-791575 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-791575 --driver=docker  --container-runtime=crio: (8.18579835s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (8.19s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.38s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-791575 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-791575 "sudo systemctl is-active --quiet service kubelet": exit status 1 (382.650433ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.38s)

TestStartStop/group/old-k8s-version/serial/FirstStart (139.82s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-737958 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0
E0109 00:44:20.951253 1683967 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/addons-983119/client.crt: no such file or directory
E0109 00:44:50.738662 1683967 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/ingress-addon-legacy-037418/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-737958 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0: (2m19.818560442s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (139.82s)

TestStartStop/group/old-k8s-version/serial/DeployApp (9.63s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-737958 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [29e72f9b-bd42-4d50-911b-6851a0a10be7] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [29e72f9b-bd42-4d50-911b-6851a0a10be7] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.003746096s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-737958 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.63s)
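The deploy-and-probe pattern above can be reproduced with kubectl alone; a sketch, assuming a busybox.yaml manifest equivalent to the repo's testdata file:

# Create the pod and block until it is Ready (the harness allows up to 8m).
kubectl --context old-k8s-version-737958 create -f testdata/busybox.yaml
kubectl --context old-k8s-version-737958 wait --for=condition=ready \
  pod -l integration-test=busybox --timeout=8m

# Exec into the pod; a sane open-file limit confirms the runtime applied
# its configured ulimits.
kubectl --context old-k8s-version-737958 exec busybox -- /bin/sh -c "ulimit -n"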
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.66s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-737958 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-737958 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.474596458s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-737958 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.66s)

TestStartStop/group/old-k8s-version/serial/Stop (12.68s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-737958 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-737958 --alsologtostderr -v=3: (12.679364793s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.68s)

TestStartStop/group/no-preload/serial/FirstStart (73.03s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-150276 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-150276 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (1m13.025892842s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (73.03s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.32s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-737958 -n old-k8s-version-737958
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-737958 -n old-k8s-version-737958: exit status 7 (124.111709ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-737958 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.32s)
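As the log notes, `minikube status` signals a stopped host through its exit code (7 here), and addon changes are still accepted in that state. A hedged sketch of the same check, again using a release `minikube` binary as a stand-in:

# Query only the host field; a stopped cluster prints "Stopped" and exits 7.
minikube status --format='{{.Host}}' -p old-k8s-version-737958
if [ $? -eq 7 ]; then
  # Addons can be toggled while the cluster is down; the setting takes
  # effect on the next start.
  minikube addons enable dashboard -p old-k8s-version-737958 \
    --images=MetricsScraper=registry.k8s.io/echoserver:1.4
fi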
TestStartStop/group/old-k8s-version/serial/SecondStart (441.51s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-737958 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0
E0109 00:46:55.256615 1683967 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/functional-451422/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-737958 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0: (7m20.986165334s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-737958 -n old-k8s-version-737958
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (441.51s)

TestStartStop/group/no-preload/serial/DeployApp (9.38s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-150276 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [b1071b6d-231e-4695-8310-d80f6ee919b5] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [b1071b6d-231e-4695-8310-d80f6ee919b5] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.004303932s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-150276 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.38s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.13s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-150276 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-150276 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.022063334s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-150276 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.13s)

TestStartStop/group/no-preload/serial/Stop (12.02s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-150276 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-150276 --alsologtostderr -v=3: (12.019662674s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.02s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-150276 -n no-preload-150276
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-150276 -n no-preload-150276: exit status 7 (92.232571ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-150276 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.22s)

TestStartStop/group/no-preload/serial/SecondStart (624.24s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-150276 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
E0109 00:49:20.950650 1683967 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/addons-983119/client.crt: no such file or directory
E0109 00:49:50.739651 1683967 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/ingress-addon-legacy-037418/client.crt: no such file or directory
E0109 00:49:58.300492 1683967 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/functional-451422/client.crt: no such file or directory
E0109 00:51:55.257035 1683967 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/functional-451422/client.crt: no such file or directory
E0109 00:52:53.784476 1683967 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/ingress-addon-legacy-037418/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-150276 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (10m23.845232396s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-150276 -n no-preload-150276
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (624.24s)
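--preload=false makes minikube pull every component image individually instead of extracting a preloaded tarball, which is why both no-preload starts run long compared with the preloaded profiles in this report. The invocation, sketched against a placeholder profile name:

# No preload tarball: each image is fetched from its registry, so first
# and second starts both pay the full pull cost.
minikube start -p no-preload-demo \
  --memory=2200 --wait=true --preload=false \
  --driver=docker --container-runtime=crio \
  --kubernetes-version=v1.29.0-rc.2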
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-m77kx" [3a221444-977b-4516-a44e-0c237f6e3ec9] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003678271s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-m77kx" [3a221444-977b-4516-a44e-0c237f6e3ec9] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003194185s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-737958 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.11s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.28s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-737958 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20220726-ed811e41
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.28s)

TestStartStop/group/old-k8s-version/serial/Pause (3.59s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-737958 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-737958 -n old-k8s-version-737958
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-737958 -n old-k8s-version-737958: exit status 2 (374.836648ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-737958 -n old-k8s-version-737958
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-737958 -n old-k8s-version-737958: exit status 2 (362.193461ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-737958 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-737958 -n old-k8s-version-737958
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-737958 -n old-k8s-version-737958
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.59s)
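The pause round-trip follows a fixed pattern: pause, confirm via status (exit 2 while components are paused), unpause, confirm again. A minimal sketch, assuming the profile is already running and a `minikube` binary stands in for the built artifact:

p=old-k8s-version-737958
minikube pause -p "$p" --alsologtostderr -v=1

# While paused the apiserver reports "Paused" and the kubelet "Stopped";
# status exits 2 in both cases, which the harness tolerates.
minikube status --format='{{.APIServer}}' -p "$p" || true
minikube status --format='{{.Kubelet}}' -p "$p" || true

minikube unpause -p "$p" --alsologtostderr -v=1
minikube status --format='{{.APIServer}}' -p "$p"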
TestStartStop/group/embed-certs/serial/FirstStart (80.32s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-290975 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4
E0109 00:54:20.951208 1683967 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/addons-983119/client.crt: no such file or directory
E0109 00:54:50.739288 1683967 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/ingress-addon-legacy-037418/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-290975 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4: (1m20.318444038s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (80.32s)

TestStartStop/group/embed-certs/serial/DeployApp (8.35s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-290975 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [5fddeeb0-c6ea-4129-b8d8-7fbdf9d32e86] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [5fddeeb0-c6ea-4129-b8d8-7fbdf9d32e86] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.004115073s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-290975 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.35s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.22s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-290975 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-290975 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.106113753s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-290975 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.22s)

TestStartStop/group/embed-certs/serial/Stop (12.06s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-290975 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-290975 --alsologtostderr -v=3: (12.062779946s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.06s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-290975 -n embed-certs-290975
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-290975 -n embed-certs-290975: exit status 7 (86.926046ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-290975 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/embed-certs/serial/SecondStart (351.19s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-290975 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4
E0109 00:55:49.502659 1683967 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/old-k8s-version-737958/client.crt: no such file or directory
E0109 00:55:49.507911 1683967 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/old-k8s-version-737958/client.crt: no such file or directory
E0109 00:55:49.518208 1683967 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/old-k8s-version-737958/client.crt: no such file or directory
E0109 00:55:49.538502 1683967 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/old-k8s-version-737958/client.crt: no such file or directory
E0109 00:55:49.578836 1683967 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/old-k8s-version-737958/client.crt: no such file or directory
E0109 00:55:49.659408 1683967 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/old-k8s-version-737958/client.crt: no such file or directory
E0109 00:55:49.819818 1683967 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/old-k8s-version-737958/client.crt: no such file or directory
E0109 00:55:50.140329 1683967 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/old-k8s-version-737958/client.crt: no such file or directory
E0109 00:55:50.781989 1683967 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/old-k8s-version-737958/client.crt: no such file or directory
E0109 00:55:52.062935 1683967 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/old-k8s-version-737958/client.crt: no such file or directory
E0109 00:55:54.623475 1683967 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/old-k8s-version-737958/client.crt: no such file or directory
E0109 00:55:59.744500 1683967 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/old-k8s-version-737958/client.crt: no such file or directory
E0109 00:56:09.985482 1683967 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/old-k8s-version-737958/client.crt: no such file or directory
E0109 00:56:30.465740 1683967 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/old-k8s-version-737958/client.crt: no such file or directory
E0109 00:56:55.256845 1683967 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/functional-451422/client.crt: no such file or directory
E0109 00:57:11.425940 1683967 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/old-k8s-version-737958/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-290975 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4: (5m50.777512536s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-290975 -n embed-certs-290975
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (351.19s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-gkf6f" [ab57cab6-8162-47fe-915c-f7d030b33d13] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004201688s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-gkf6f" [ab57cab6-8162-47fe-915c-f7d030b33d13] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004103955s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-150276 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.27s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-150276 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.27s)
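The image audit lists everything known to the runtime and flags images outside the expected Kubernetes set; the kindest/* and gcr.io/k8s-minikube/* tags above are the anticipated leftovers. A sketch of equivalent filtering, assuming jq is installed and that the JSON output is an array of objects carrying a repoTags field:

# Print all image tags, then drop the stock Kubernetes images.
minikube -p no-preload-150276 image list --format=json \
  | jq -r '.[].repoTags[]' \
  | grep -vE '^registry\.k8s\.io/'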
TestStartStop/group/no-preload/serial/Pause (3.44s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-150276 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-150276 -n no-preload-150276
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-150276 -n no-preload-150276: exit status 2 (371.251035ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-150276 -n no-preload-150276
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-150276 -n no-preload-150276: exit status 2 (362.868325ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-150276 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-150276 -n no-preload-150276
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-150276 -n no-preload-150276
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.44s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (79.67s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-848989 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4
E0109 00:58:33.346296 1683967 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/old-k8s-version-737958/client.crt: no such file or directory
E0109 00:59:03.999528 1683967 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/addons-983119/client.crt: no such file or directory
E0109 00:59:20.951237 1683967 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/addons-983119/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-848989 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4: (1m19.67329455s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (79.67s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.36s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-848989 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [da6f9a8e-b175-49e8-a197-084da37a1cb2] Pending
helpers_test.go:344: "busybox" [da6f9a8e-b175-49e8-a197-084da37a1cb2] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [da6f9a8e-b175-49e8-a197-084da37a1cb2] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.004021774s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-848989 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.36s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.22s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-848989 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0109 00:59:50.739191 1683967 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/ingress-addon-legacy-037418/client.crt: no such file or directory
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-848989 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.104583418s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-848989 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.22s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (12s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-848989 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-848989 --alsologtostderr -v=3: (11.997591671s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.00s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-848989 -n default-k8s-diff-port-848989
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-848989 -n default-k8s-diff-port-848989: exit status 7 (90.482928ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-848989 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.22s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (353.08s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-848989 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4
E0109 01:00:49.503051 1683967 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/old-k8s-version-737958/client.crt: no such file or directory
E0109 01:01:17.186504 1683967 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/old-k8s-version-737958/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-848989 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4: (5m52.46039699s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-848989 -n default-k8s-diff-port-848989
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (353.08s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (16.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-5xmtb" [da0e80b1-bf0d-403e-846a-00da065f4dc5] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-5xmtb" [da0e80b1-bf0d-403e-846a-00da065f4dc5] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 16.003998157s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (16.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-5xmtb" [da0e80b1-bf0d-403e-846a-00da065f4dc5] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00374819s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-290975 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.11s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.27s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-290975 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.27s)

TestStartStop/group/embed-certs/serial/Pause (3.43s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-290975 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-290975 -n embed-certs-290975
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-290975 -n embed-certs-290975: exit status 2 (375.56549ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-290975 -n embed-certs-290975
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-290975 -n embed-certs-290975: exit status 2 (356.450133ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-290975 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-290975 -n embed-certs-290975
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-290975 -n embed-certs-290975
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.43s)

TestStartStop/group/newest-cni/serial/FirstStart (46.38s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-333760 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
E0109 01:01:55.257240 1683967 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/functional-451422/client.crt: no such file or directory
E0109 01:02:16.130304 1683967 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/no-preload-150276/client.crt: no such file or directory
E0109 01:02:16.135553 1683967 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/no-preload-150276/client.crt: no such file or directory
E0109 01:02:16.145812 1683967 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/no-preload-150276/client.crt: no such file or directory
E0109 01:02:16.166067 1683967 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/no-preload-150276/client.crt: no such file or directory
E0109 01:02:16.206316 1683967 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/no-preload-150276/client.crt: no such file or directory
E0109 01:02:16.286880 1683967 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/no-preload-150276/client.crt: no such file or directory
E0109 01:02:16.447208 1683967 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/no-preload-150276/client.crt: no such file or directory
E0109 01:02:16.767535 1683967 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/no-preload-150276/client.crt: no such file or directory
E0109 01:02:17.408379 1683967 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/no-preload-150276/client.crt: no such file or directory
E0109 01:02:18.688581 1683967 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/no-preload-150276/client.crt: no such file or directory
E0109 01:02:21.249708 1683967 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/no-preload-150276/client.crt: no such file or directory
E0109 01:02:26.370338 1683967 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/no-preload-150276/client.crt: no such file or directory
E0109 01:02:36.610964 1683967 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/no-preload-150276/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-333760 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (46.384859706s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (46.38s)
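This profile exercises a bring-your-own-CNI configuration: readiness gating is relaxed to components that can come up without pod networking, and the pod CIDR is overridden through kubeadm, which is why later subtests warn that pods cannot schedule. The relevant flags, condensed into one sketch with a placeholder profile name:

# Only wait for pieces that don't need a CNI; user pods stay Pending
# until a network plugin is installed.
minikube start -p newest-cni-demo --memory=2200 \
  --wait=apiserver,system_pods,default_sa \
  --feature-gates ServerSideApply=true \
  --network-plugin=cni \
  --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 \
  --driver=docker --container-runtime=crio \
  --kubernetes-version=v1.29.0-rc.2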
TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.09s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-333760 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-333760 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.086348104s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.09s)

TestStartStop/group/newest-cni/serial/Stop (1.27s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-333760 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-333760 --alsologtostderr -v=3: (1.267576659s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.27s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.23s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-333760 -n newest-cni-333760
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-333760 -n newest-cni-333760: exit status 7 (89.458846ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-333760 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.23s)

TestStartStop/group/newest-cni/serial/SecondStart (30.78s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-333760 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
E0109 01:02:57.091998 1683967 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/no-preload-150276/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-333760 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (30.343956597s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-333760 -n newest-cni-333760
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (30.78s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.29s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-333760 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.29s)

TestStartStop/group/newest-cni/serial/Pause (3.18s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-333760 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-333760 -n newest-cni-333760
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-333760 -n newest-cni-333760: exit status 2 (377.757905ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-333760 -n newest-cni-333760
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-333760 -n newest-cni-333760: exit status 2 (363.470585ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-333760 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-333760 -n newest-cni-333760
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-333760 -n newest-cni-333760
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.18s)
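[Editor's note] The Pause subtest drives the sequence visible above: pause the profile, confirm the API server reports Paused and the kubelet Stopped (status exits 2 while components are not running, logged as "may be ok"), then unpause and re-check. A minimal sketch under the same assumptions; the binary path, profile, and flags come from the log, and the Go wrapper is an illustration, not the test's own code.

package main

import (
	"fmt"
	"os/exec"
)

// run invokes the minikube binary from the log with the given arguments;
// a non-nil error carries the (expected) non-zero exit status.
func run(args ...string) (string, error) {
	out, err := exec.Command("out/minikube-linux-arm64", args...).CombinedOutput()
	return string(out), err
}

func main() {
	const profile = "newest-cni-333760" // profile name from the log above

	run("pause", "-p", profile, "--alsologtostderr", "-v=1")

	// While paused, status exits 2 and prints Paused / Stopped.
	for _, f := range []string{"{{.APIServer}}", "{{.Kubelet}}"} {
		out, err := run("status", "--format="+f, "-p", profile, "-n", profile)
		fmt.Printf("%s -> %s (err: %v)\n", f, out, err)
	}

	run("unpause", "-p", profile, "--alsologtostderr", "-v=1")
}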

TestNetworkPlugins/group/auto/Start (80.56s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-394167 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
E0109 01:03:38.052792 1683967 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/no-preload-150276/client.crt: no such file or directory
E0109 01:04:20.954901 1683967 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/addons-983119/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-394167 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (1m20.557424703s)
--- PASS: TestNetworkPlugins/group/auto/Start (80.56s)

TestNetworkPlugins/group/auto/KubeletFlags (0.34s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-394167 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.34s)

TestNetworkPlugins/group/auto/NetCatPod (9.27s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-394167 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-sl9gd" [2dd16275-f68f-4aa4-9587-9b193914d910] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-sl9gd" [2dd16275-f68f-4aa4-9587-9b193914d910] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.003981795s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.27s)
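[Editor's note] The NetCatPod step follows the same pattern in every network-plugin group below: force-replace the netcat deployment from testdata/netcat-deployment.yaml, then poll until a pod labelled app=netcat is Running and Ready. A minimal sketch, substituting kubectl wait for the test's own polling helper (helpers_test.go:344); the context name and 15m timeout are taken from the log.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	const ctx = "auto-394167" // kubectl context from the log above

	// Recreate the netcat deployment, as the test does.
	replace := exec.Command("kubectl", "--context", ctx,
		"replace", "--force", "-f", "testdata/netcat-deployment.yaml")
	if out, err := replace.CombinedOutput(); err != nil {
		fmt.Printf("replace failed: %v\n%s", err, out)
		return
	}

	// Wait for the pod to become Ready; the test allows up to 15m.
	wait := exec.Command("kubectl", "--context", ctx,
		"wait", "--for=condition=Ready", "pod", "-l", "app=netcat", "--timeout=15m")
	out, err := wait.CombinedOutput()
	fmt.Printf("%s(err: %v)\n", out, err)
}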

TestNetworkPlugins/group/auto/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-394167 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.20s)

TestNetworkPlugins/group/auto/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-394167 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.17s)

TestNetworkPlugins/group/auto/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-394167 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.17s)
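[Editor's note] The three short subtests above are in-pod connectivity probes run through the netcat deployment: DNS resolves kubernetes.default, Localhost checks the pod can reach its own port, and HairPin checks the pod can reach itself back through its service (the netcat hostname). A sketch of the same probes, with the kubectl exec commands copied from the log; the Go wrapper is illustrative.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	const ctx = "auto-394167" // kubectl context from the log above

	probes := map[string][]string{
		"dns":       {"nslookup", "kubernetes.default"},
		"localhost": {"/bin/sh", "-c", "nc -w 5 -i 5 -z localhost 8080"},
		"hairpin":   {"/bin/sh", "-c", "nc -w 5 -i 5 -z netcat 8080"},
	}
	for name, cmd := range probes {
		args := append([]string{"--context", ctx, "exec", "deployment/netcat", "--"}, cmd...)
		out, err := exec.Command("kubectl", args...).CombinedOutput()
		fmt.Printf("%s: err=%v\n%s", name, err, out)
	}
}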

TestNetworkPlugins/group/kindnet/Start (57.73s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-394167 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
E0109 01:05:49.503018 1683967 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/old-k8s-version-737958/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-394167 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (57.732898378s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (57.73s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (10.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-jqw66" [6b4b30aa-d0b3-4e57-ae11-e59b27574653] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-jqw66" [6b4b30aa-d0b3-4e57-ae11-e59b27574653] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 10.004255761s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (10.01s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-jqw66" [6b4b30aa-d0b3-4e57-ae11-e59b27574653] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003609101s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-848989 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.33s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-848989 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.33s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-2t9pj" [d6d2867a-6c3c-4b1d-aaab-a760811b5d3d] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004727524s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (3.47s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-848989 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-848989 -n default-k8s-diff-port-848989
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-848989 -n default-k8s-diff-port-848989: exit status 2 (383.323141ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-848989 -n default-k8s-diff-port-848989
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-848989 -n default-k8s-diff-port-848989: exit status 2 (362.762741ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-848989 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-848989 -n default-k8s-diff-port-848989
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-848989 -n default-k8s-diff-port-848989
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.47s)
E0109 01:11:02.589191 1683967 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/auto-394167/client.crt: no such file or directory
E0109 01:11:02.828766 1683967 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/default-k8s-diff-port-848989/client.crt: no such file or directory
E0109 01:11:11.988714 1683967 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/kindnet-394167/client.crt: no such file or directory
E0109 01:11:11.993973 1683967 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/kindnet-394167/client.crt: no such file or directory
E0109 01:11:12.004246 1683967 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/kindnet-394167/client.crt: no such file or directory
E0109 01:11:12.024543 1683967 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/kindnet-394167/client.crt: no such file or directory
E0109 01:11:12.064829 1683967 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/kindnet-394167/client.crt: no such file or directory
E0109 01:11:12.145082 1683967 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/kindnet-394167/client.crt: no such file or directory
E0109 01:11:12.305427 1683967 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/kindnet-394167/client.crt: no such file or directory
E0109 01:11:12.625815 1683967 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/kindnet-394167/client.crt: no such file or directory
E0109 01:11:13.266891 1683967 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/kindnet-394167/client.crt: no such file or directory
E0109 01:11:14.547498 1683967 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/kindnet-394167/client.crt: no such file or directory
E0109 01:11:17.107987 1683967 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/kindnet-394167/client.crt: no such file or directory
E0109 01:11:22.228687 1683967 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/kindnet-394167/client.crt: no such file or directory
E0109 01:11:32.469072 1683967 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/kindnet-394167/client.crt: no such file or directory
E0109 01:11:52.950072 1683967 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/kindnet-394167/client.crt: no such file or directory
E0109 01:11:55.257224 1683967 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/functional-451422/client.crt: no such file or directory

TestNetworkPlugins/group/kindnet/KubeletFlags (0.42s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-394167 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.42s)

TestNetworkPlugins/group/kindnet/NetCatPod (12.38s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-394167 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-4nzwn" [13e646ea-008e-4286-a596-a57a1c9b7820] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-4nzwn" [13e646ea-008e-4286-a596-a57a1c9b7820] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 12.004414121s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (12.38s)

TestNetworkPlugins/group/calico/Start (84.01s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-394167 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-394167 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (1m24.010280444s)
--- PASS: TestNetworkPlugins/group/calico/Start (84.01s)

TestNetworkPlugins/group/kindnet/DNS (0.28s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-394167 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.28s)

TestNetworkPlugins/group/kindnet/Localhost (0.21s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-394167 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.21s)

TestNetworkPlugins/group/kindnet/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-394167 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.18s)

TestNetworkPlugins/group/custom-flannel/Start (76.32s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-394167 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
E0109 01:07:16.129749 1683967 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/no-preload-150276/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-394167 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (1m16.316901462s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (76.32s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-fsrr2" [bdbb7d78-0035-4541-8f05-2c0400d0d8d8] Running
E0109 01:07:43.815474 1683967 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/no-preload-150276/client.crt: no such file or directory
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004889469s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.43s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-394167 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.43s)

TestNetworkPlugins/group/calico/NetCatPod (13.32s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-394167 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-jn8lg" [d5a84524-5ff1-4fc7-8416-740b61302300] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-jn8lg" [d5a84524-5ff1-4fc7-8416-740b61302300] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 13.003704016s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (13.32s)

TestNetworkPlugins/group/calico/DNS (0.21s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-394167 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.21s)

TestNetworkPlugins/group/calico/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-394167 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.16s)

TestNetworkPlugins/group/calico/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-394167 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.16s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.51s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-394167 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.51s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (12.37s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-394167 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-d6ffk" [db499f9d-f3ba-4d49-8dd8-8009b76abb42] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-d6ffk" [db499f9d-f3ba-4d49-8dd8-8009b76abb42] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 12.004234039s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (12.37s)

TestNetworkPlugins/group/custom-flannel/DNS (0.25s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-394167 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.25s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.21s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-394167 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.21s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.22s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-394167 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.22s)

TestNetworkPlugins/group/enable-default-cni/Start (93.96s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-394167 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-394167 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m33.956200262s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (93.96s)

TestNetworkPlugins/group/flannel/Start (67.72s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-394167 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
E0109 01:09:20.951135 1683967 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/addons-983119/client.crt: no such file or directory
E0109 01:09:33.785556 1683967 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/ingress-addon-legacy-037418/client.crt: no such file or directory
E0109 01:09:40.666102 1683967 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/auto-394167/client.crt: no such file or directory
E0109 01:09:40.671407 1683967 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/auto-394167/client.crt: no such file or directory
E0109 01:09:40.681716 1683967 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/auto-394167/client.crt: no such file or directory
E0109 01:09:40.701962 1683967 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/auto-394167/client.crt: no such file or directory
E0109 01:09:40.742372 1683967 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/auto-394167/client.crt: no such file or directory
E0109 01:09:40.822707 1683967 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/auto-394167/client.crt: no such file or directory
E0109 01:09:40.907014 1683967 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/default-k8s-diff-port-848989/client.crt: no such file or directory
E0109 01:09:40.912299 1683967 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/default-k8s-diff-port-848989/client.crt: no such file or directory
E0109 01:09:40.922522 1683967 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/default-k8s-diff-port-848989/client.crt: no such file or directory
E0109 01:09:40.942776 1683967 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/default-k8s-diff-port-848989/client.crt: no such file or directory
E0109 01:09:40.983164 1683967 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/default-k8s-diff-port-848989/client.crt: no such file or directory
E0109 01:09:40.983165 1683967 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/auto-394167/client.crt: no such file or directory
E0109 01:09:41.063740 1683967 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/default-k8s-diff-port-848989/client.crt: no such file or directory
E0109 01:09:41.224002 1683967 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/default-k8s-diff-port-848989/client.crt: no such file or directory
E0109 01:09:41.305183 1683967 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/auto-394167/client.crt: no such file or directory
E0109 01:09:41.544710 1683967 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/default-k8s-diff-port-848989/client.crt: no such file or directory
E0109 01:09:41.945999 1683967 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/auto-394167/client.crt: no such file or directory
E0109 01:09:42.185591 1683967 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/default-k8s-diff-port-848989/client.crt: no such file or directory
E0109 01:09:43.226246 1683967 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/auto-394167/client.crt: no such file or directory
E0109 01:09:43.465866 1683967 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/default-k8s-diff-port-848989/client.crt: no such file or directory
E0109 01:09:45.786398 1683967 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/auto-394167/client.crt: no such file or directory
E0109 01:09:46.026535 1683967 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/default-k8s-diff-port-848989/client.crt: no such file or directory
E0109 01:09:50.739348 1683967 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/ingress-addon-legacy-037418/client.crt: no such file or directory
E0109 01:09:50.906574 1683967 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/auto-394167/client.crt: no such file or directory
E0109 01:09:51.147236 1683967 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/default-k8s-diff-port-848989/client.crt: no such file or directory
E0109 01:10:01.147074 1683967 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/auto-394167/client.crt: no such file or directory
E0109 01:10:01.387625 1683967 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/default-k8s-diff-port-848989/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-394167 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (1m7.7202443s)
--- PASS: TestNetworkPlugins/group/flannel/Start (67.72s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.34s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-394167 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.34s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.29s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-394167 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-z59cs" [d12d6257-8c97-40ff-9215-f2f91f940020] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-z59cs" [d12d6257-8c97-40ff-9215-f2f91f940020] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.004446168s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.29s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-xtzv2" [3b74a3b1-d40a-4741-ad73-6ab39ecfd1a3] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004390973s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.34s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-394167 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.34s)

TestNetworkPlugins/group/flannel/NetCatPod (12.26s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-394167 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-6rjrm" [b0495be2-4125-47de-94f9-0b6a330c6631] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-6rjrm" [b0495be2-4125-47de-94f9-0b6a330c6631] Running
E0109 01:10:21.628923 1683967 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/auto-394167/client.crt: no such file or directory
E0109 01:10:21.867942 1683967 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/default-k8s-diff-port-848989/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 12.004803214s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (12.26s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.26s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-394167 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.26s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.2s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-394167 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.20s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.22s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-394167 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.22s)

TestNetworkPlugins/group/flannel/DNS (0.31s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-394167 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.31s)

TestNetworkPlugins/group/flannel/Localhost (0.23s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-394167 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.23s)

TestNetworkPlugins/group/flannel/HairPin (0.23s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-394167 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.23s)

TestNetworkPlugins/group/bridge/Start (88.49s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-394167 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-394167 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m28.488566521s)
--- PASS: TestNetworkPlugins/group/bridge/Start (88.49s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.32s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-394167 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.32s)

TestNetworkPlugins/group/bridge/NetCatPod (9.26s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-394167 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-kxb6p" [6aa5e12c-86de-480d-bcc0-bf8ab4cdfabf] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-kxb6p" [6aa5e12c-86de-480d-bcc0-bf8ab4cdfabf] Running
E0109 01:12:12.547368 1683967 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/old-k8s-version-737958/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.004147837s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.26s)

TestNetworkPlugins/group/bridge/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-394167 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.18s)

TestNetworkPlugins/group/bridge/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-394167 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.16s)

TestNetworkPlugins/group/bridge/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-394167 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
E0109 01:12:16.130026 1683967 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/no-preload-150276/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.16s)

Test skip (32/316)

TestDownloadOnly/v1.16.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:155: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

TestDownloadOnly/v1.28.4/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.4/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.4/cached-images (0.00s)

TestDownloadOnly/v1.28.4/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.4/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.4/binaries (0.00s)

TestDownloadOnly/v1.28.4/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.4/kubectl
aaa_download_only_test.go:155: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.4/kubectl (0.00s)

TestDownloadOnly/v1.29.0-rc.2/cached-images (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/cached-images (0.00s)

TestDownloadOnly/v1.29.0-rc.2/binaries (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/binaries (0.00s)

TestDownloadOnly/v1.29.0-rc.2/kubectl (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/kubectl
aaa_download_only_test.go:155: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/kubectl (0.00s)

TestDownloadOnlyKic (0.67s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:225: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-594175 --alsologtostderr --driver=docker  --container-runtime=crio
aaa_download_only_test.go:237: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-594175" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-594175
--- SKIP: TestDownloadOnlyKic (0.67s)

TestOffline (0s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

TestAddons/parallel/HelmTiller (0s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:444: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:498: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1786: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)
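All three DNS-forwarding skips in this group come from the same OS/driver guard at functional_test_tunnel_test.go:99. A minimal sketch of that guard, with a hypothetical DriverName helper standing in for the suite's driver lookup:

package integration

import (
	"runtime"
	"testing"
)

// DriverName is a hypothetical stand-in for the suite's driver lookup;
// it returns "docker" in this run, and only darwin+hyperkit would proceed.
func DriverName() string { return "docker" }

func TestDNSForwardingSketch(t *testing.T) {
	if runtime.GOOS != "darwin" || DriverName() != "hyperkit" {
		t.Skip("DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding")
	}
	// ... dig against the tunnel-provided DNS and assert resolution ...
}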

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.18s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-240160" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-240160
--- SKIP: TestStartStop/group/disable-driver-mounts (0.18s)

TestNetworkPlugins/group/kubenet (4.06s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as the crio container runtime requires CNI
panic.go:523: 
----------------------- debugLogs start: kubenet-394167 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-394167

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-394167

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-394167

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-394167

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-394167

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-394167

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-394167

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-394167

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-394167

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-394167

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-394167" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-394167"

>>> host: /etc/hosts:
* Profile "kubenet-394167" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-394167"

>>> host: /etc/resolv.conf:
* Profile "kubenet-394167" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-394167"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-394167

>>> host: crictl pods:
* Profile "kubenet-394167" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-394167"

>>> host: crictl containers:
* Profile "kubenet-394167" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-394167"

>>> k8s: describe netcat deployment:
error: context "kubenet-394167" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-394167" does not exist

>>> k8s: netcat logs:
error: context "kubenet-394167" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-394167" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-394167" does not exist

>>> k8s: coredns logs:
error: context "kubenet-394167" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-394167" does not exist

>>> k8s: api server logs:
error: context "kubenet-394167" does not exist

>>> host: /etc/cni:
* Profile "kubenet-394167" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-394167"

>>> host: ip a s:
* Profile "kubenet-394167" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-394167"

>>> host: ip r s:
* Profile "kubenet-394167" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-394167"

>>> host: iptables-save:
* Profile "kubenet-394167" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-394167"

>>> host: iptables table nat:
* Profile "kubenet-394167" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-394167"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-394167" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-394167" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-394167" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-394167" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-394167"

>>> host: kubelet daemon config:
* Profile "kubenet-394167" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-394167"

>>> k8s: kubelet logs:
* Profile "kubenet-394167" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-394167"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-394167" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-394167"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-394167" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-394167"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17830-1678586/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 09 Jan 2024 00:41:34 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.67.2:8443
  name: NoKubernetes-791575
contexts:
- context:
    cluster: NoKubernetes-791575
    extensions:
    - extension:
        last-update: Tue, 09 Jan 2024 00:41:34 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: context_info
    namespace: default
    user: NoKubernetes-791575
  name: NoKubernetes-791575
current-context: NoKubernetes-791575
kind: Config
preferences: {}
users:
- name: NoKubernetes-791575
  user:
    client-certificate: /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/NoKubernetes-791575/client.crt
    client-key: /home/jenkins/minikube-integration/17830-1678586/.minikube/profiles/NoKubernetes-791575/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-394167

>>> host: docker daemon status:
* Profile "kubenet-394167" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-394167"

>>> host: docker daemon config:
* Profile "kubenet-394167" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-394167"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-394167" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-394167"

>>> host: docker system info:
* Profile "kubenet-394167" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-394167"

>>> host: cri-docker daemon status:
* Profile "kubenet-394167" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-394167"

>>> host: cri-docker daemon config:
* Profile "kubenet-394167" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-394167"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-394167" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-394167"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-394167" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-394167"

>>> host: cri-dockerd version:
* Profile "kubenet-394167" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-394167"

>>> host: containerd daemon status:
* Profile "kubenet-394167" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-394167"

>>> host: containerd daemon config:
* Profile "kubenet-394167" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-394167"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-394167" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-394167"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-394167" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-394167"

>>> host: containerd config dump:
* Profile "kubenet-394167" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-394167"

>>> host: crio daemon status:
* Profile "kubenet-394167" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-394167"

>>> host: crio daemon config:
* Profile "kubenet-394167" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-394167"

>>> host: /etc/crio:
* Profile "kubenet-394167" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-394167"

>>> host: crio config:
* Profile "kubenet-394167" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-394167"

----------------------- debugLogs end: kubenet-394167 [took: 3.828039798s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-394167" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-394167
--- SKIP: TestNetworkPlugins/group/kubenet (4.06s)
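The kubenet skip at net_test.go:93 is another runtime gate: kubenet provides no CNI configuration, so the group cannot run when the runtime under test depends on one. A sketch under that assumption, with a hypothetical helper name (not the actual minikube source):

package integration

import "testing"

// runtimeNeedsCNI is a hypothetical helper; crio and containerd both need a
// CNI plugin, so this run (crio) returns true and the group is skipped.
func runtimeNeedsCNI() bool { return true }

func TestKubenetGateSketch(t *testing.T) {
	if runtimeNeedsCNI() {
		t.Skip("Skipping the test as the crio container runtime requires CNI")
	}
	// ... kubenet network-plugin connectivity checks would run here ...
}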

                                                
                                    
TestNetworkPlugins/group/cilium (6.29s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:523: 
----------------------- debugLogs start: cilium-394167 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-394167

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-394167

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-394167

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-394167

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-394167

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-394167

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-394167

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-394167

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-394167

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-394167

>>> host: /etc/nsswitch.conf:
* Profile "cilium-394167" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-394167"

>>> host: /etc/hosts:
* Profile "cilium-394167" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-394167"

>>> host: /etc/resolv.conf:
* Profile "cilium-394167" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-394167"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-394167

>>> host: crictl pods:
* Profile "cilium-394167" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-394167"

>>> host: crictl containers:
* Profile "cilium-394167" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-394167"

>>> k8s: describe netcat deployment:
error: context "cilium-394167" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-394167" does not exist

>>> k8s: netcat logs:
error: context "cilium-394167" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-394167" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-394167" does not exist

>>> k8s: coredns logs:
error: context "cilium-394167" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-394167" does not exist

>>> k8s: api server logs:
error: context "cilium-394167" does not exist

>>> host: /etc/cni:
* Profile "cilium-394167" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-394167"

>>> host: ip a s:
* Profile "cilium-394167" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-394167"

>>> host: ip r s:
* Profile "cilium-394167" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-394167"

>>> host: iptables-save:
* Profile "cilium-394167" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-394167"

>>> host: iptables table nat:
* Profile "cilium-394167" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-394167"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-394167

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-394167

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-394167" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-394167" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-394167

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-394167

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-394167" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-394167" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-394167" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-394167" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-394167" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-394167" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-394167"

>>> host: kubelet daemon config:
* Profile "cilium-394167" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-394167"

>>> k8s: kubelet logs:
* Profile "cilium-394167" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-394167"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-394167" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-394167"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-394167" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-394167"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-394167

>>> host: docker daemon status:
* Profile "cilium-394167" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-394167"

>>> host: docker daemon config:
* Profile "cilium-394167" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-394167"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-394167" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-394167"

>>> host: docker system info:
* Profile "cilium-394167" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-394167"

>>> host: cri-docker daemon status:
* Profile "cilium-394167" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-394167"

>>> host: cri-docker daemon config:
* Profile "cilium-394167" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-394167"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-394167" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-394167"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-394167" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-394167"

>>> host: cri-dockerd version:
* Profile "cilium-394167" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-394167"

>>> host: containerd daemon status:
* Profile "cilium-394167" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-394167"

>>> host: containerd daemon config:
* Profile "cilium-394167" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-394167"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-394167" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-394167"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-394167" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-394167"

>>> host: containerd config dump:
* Profile "cilium-394167" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-394167"

>>> host: crio daemon status:
* Profile "cilium-394167" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-394167"

>>> host: crio daemon config:
* Profile "cilium-394167" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-394167"

>>> host: /etc/crio:
* Profile "cilium-394167" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-394167"

>>> host: crio config:
* Profile "cilium-394167" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-394167"

----------------------- debugLogs end: cilium-394167 [took: 6.083280321s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-394167" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-394167
--- SKIP: TestNetworkPlugins/group/cilium (6.29s)
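For reference, the debugLogs blocks above are produced by running a fixed list of kubectl/minikube commands against the profile and logging whatever comes back, which is why a profile that was never created yields the same context/profile-not-found errors for every section. A rough sketch of that collection loop; the command list and helper name are illustrative only, not the actual minikube helpers:

package integration

import (
	"os/exec"
	"testing"
)

// dumpDebugLogs is a rough illustration of the ">>>" sections above: each
// label maps to a command whose combined output is logged, errors and all.
func dumpDebugLogs(t *testing.T, profile string) {
	cmds := []struct {
		label string
		args  []string
	}{
		{"netcat: nslookup kubernetes.default", []string{"kubectl", "--context", profile, "exec", "deploy/netcat", "--", "nslookup", "kubernetes.default"}},
		{"host: crio config", []string{"minikube", "-p", profile, "ssh", "sudo crio config"}},
	}
	for _, c := range cmds {
		out, err := exec.Command(c.args[0], c.args[1:]...).CombinedOutput()
		t.Logf(">>> %s:\n%s(err: %v)", c.label, out, err)
	}
}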

                                                
                                    