Test Report: Docker_Linux_crio_arm64 17885

b721bab7b488b5e07b471be256ee12ce84535d3b:2024-01-03:32546

Failed tests (7/310)

Order  Failed test                                          Duration (s)
   35  TestAddons/parallel/Ingress                                168.19
  167  TestIngressAddonLegacy/serial/ValidateIngressAddons        179.21
  217  TestMultiNode/serial/PingHostFrom2Pods                       4.24
  239  TestRunningBinaryUpgrade                                    81.16
  242  TestMissingContainerUpgrade                                184.65
  254  TestStoppedBinaryUpgrade/Upgrade                            99.98
  265  TestPause/serial/SecondStartNoReconfiguration               54.80
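One way to re-run a single failed test from this job locally, following the form in minikube's contributor testing docs (the exact Makefile wiring and quoting are assumptions; driver and runtime flags mirror this job's Docker_Linux_crio_arm64 configuration):

  # Re-run only the Ingress test against the docker driver with cri-o
  env TEST_ARGS="-minikube-start-args='--driver=docker --container-runtime=crio' -test.run TestAddons/parallel/Ingress" make integration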
TestAddons/parallel/Ingress (168.19s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-845596 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-845596 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-845596 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [fd6d0f6d-cd98-40ba-ba28-b14fe153c292] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [fd6d0f6d-cd98-40ba-ba28-b14fe153c292] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.003534711s
addons_test.go:262: (dbg) Run:  out/minikube-linux-arm64 -p addons-845596 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-845596 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m9.633593357s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:278: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
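The curl above is the test's ingress reachability check, and status 28 matches curl's own timeout exit code, so the request reached the node but the ingress controller never answered. A minimal sketch of reproducing this by hand against the same profile (the -m timeout and the kubectl commands are diagnostic additions, not part of the test; the controller selector is the one the test itself waits on):

  # Re-run the failing check with an explicit 30s timeout (curl exit 28 = timed out)
  out/minikube-linux-arm64 -p addons-845596 ssh "curl -s -m 30 http://127.0.0.1/ -H 'Host: nginx.example.com'"
  # Verify the ingress-nginx controller pod is running, then look at its logs
  kubectl --context addons-845596 -n ingress-nginx get pods -o wide
  kubectl --context addons-845596 -n ingress-nginx logs -l app.kubernetes.io/component=controller --tail=50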
addons_test.go:286: (dbg) Run:  kubectl --context addons-845596 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-arm64 -p addons-845596 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:297: (dbg) Non-zero exit: nslookup hello-john.test 192.168.49.2: exit status 1 (15.064225793s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
addons_test.go:299: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.49.2" : exit status 1
addons_test.go:303: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached

stderr: 
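The nslookup above queries the ingress-dns addon directly on the cluster IP (192.168.49.2, from the minikube ip step) and timed out. A hedged set of follow-up checks one could run while the addon is still enabled (it is disabled two steps below; the addon's pod name and labels vary by version, hence the grep):

  # The failing query itself, then the same query via dig with a short timeout
  nslookup hello-john.test 192.168.49.2
  dig +time=5 +tries=1 @192.168.49.2 hello-john.test
  # The addon runs as a pod in kube-system
  kubectl --context addons-845596 -n kube-system get pods | grep -i ingress-dns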
addons_test.go:306: (dbg) Run:  out/minikube-linux-arm64 -p addons-845596 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:306: (dbg) Done: out/minikube-linux-arm64 -p addons-845596 addons disable ingress-dns --alsologtostderr -v=1: (1.359166451s)
addons_test.go:311: (dbg) Run:  out/minikube-linux-arm64 -p addons-845596 addons disable ingress --alsologtostderr -v=1
addons_test.go:311: (dbg) Done: out/minikube-linux-arm64 -p addons-845596 addons disable ingress --alsologtostderr -v=1: (8.08115539s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-845596
helpers_test.go:235: (dbg) docker inspect addons-845596:

-- stdout --
	[
	    {
	        "Id": "6e145defabd62e3cfc48e7b8e15fede3614e7eb71c4803fe6a1baded45cc3a6f",
	        "Created": "2024-01-03T19:53:33.119023553Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 415822,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-01-03T19:53:33.4515096Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:3bfff26a1ae256fcdf8f10a333efdefbe26edc5c1669e1cc5c973c016e44d3c4",
	        "ResolvConfPath": "/var/lib/docker/containers/6e145defabd62e3cfc48e7b8e15fede3614e7eb71c4803fe6a1baded45cc3a6f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6e145defabd62e3cfc48e7b8e15fede3614e7eb71c4803fe6a1baded45cc3a6f/hostname",
	        "HostsPath": "/var/lib/docker/containers/6e145defabd62e3cfc48e7b8e15fede3614e7eb71c4803fe6a1baded45cc3a6f/hosts",
	        "LogPath": "/var/lib/docker/containers/6e145defabd62e3cfc48e7b8e15fede3614e7eb71c4803fe6a1baded45cc3a6f/6e145defabd62e3cfc48e7b8e15fede3614e7eb71c4803fe6a1baded45cc3a6f-json.log",
	        "Name": "/addons-845596",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-845596:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-845596",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/bf1c24cfff8055aec1cc78d336dbfdb554912f450b8fefc7da244cf2261c8728-init/diff:/var/lib/docker/overlay2/0cefd74c13c0ff527608d5d1778b7a3893c62167f91a1554bd1fa9cb8110135e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/bf1c24cfff8055aec1cc78d336dbfdb554912f450b8fefc7da244cf2261c8728/merged",
	                "UpperDir": "/var/lib/docker/overlay2/bf1c24cfff8055aec1cc78d336dbfdb554912f450b8fefc7da244cf2261c8728/diff",
	                "WorkDir": "/var/lib/docker/overlay2/bf1c24cfff8055aec1cc78d336dbfdb554912f450b8fefc7da244cf2261c8728/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-845596",
	                "Source": "/var/lib/docker/volumes/addons-845596/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-845596",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-845596",
	                "name.minikube.sigs.k8s.io": "addons-845596",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b3a90611144c299241be02691c3486696fdc3950118a7f2b82e942a955329891",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33103"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33102"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33099"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33101"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33100"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/b3a90611144c",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-845596": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "6e145defabd6",
	                        "addons-845596"
	                    ],
	                    "NetworkID": "9c7ea8bc57de9d3afe83ab2b8ceaf32b869248604f3eaf12dbd7eda2f954cb2d",
	                    "EndpointID": "8a76028e1d192d48dbd05209eb823899569c2ec8d0e1fbfe789dfe1d1f1b72e2",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
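The inspect dump above is the container's full record; the fields the post-mortem usually needs can be pulled out with jq (assuming jq is available on the host — the queries below just re-read the data shown above):

  # Published host ports (the 33099-33103 mappings above)
  docker inspect addons-845596 | jq '.[0].NetworkSettings.Ports'
  # Container IP on the minikube network, and the runtime state
  docker inspect addons-845596 | jq '.[0].NetworkSettings.Networks["addons-845596"].IPAddress'
  docker inspect addons-845596 | jq '.[0].State.Status'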
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-845596 -n addons-845596
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-845596 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-845596 logs -n 25: (1.614046992s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| delete  | --all                                                                                       | minikube               | jenkins | v1.32.0 | 03 Jan 24 19:53 UTC | 03 Jan 24 19:53 UTC |
	| delete  | -p download-only-684862                                                                     | download-only-684862   | jenkins | v1.32.0 | 03 Jan 24 19:53 UTC | 03 Jan 24 19:53 UTC |
	| delete  | -p download-only-684862                                                                     | download-only-684862   | jenkins | v1.32.0 | 03 Jan 24 19:53 UTC | 03 Jan 24 19:53 UTC |
	| start   | --download-only -p                                                                          | download-docker-213592 | jenkins | v1.32.0 | 03 Jan 24 19:53 UTC |                     |
	|         | download-docker-213592                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p download-docker-213592                                                                   | download-docker-213592 | jenkins | v1.32.0 | 03 Jan 24 19:53 UTC | 03 Jan 24 19:53 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-868924   | jenkins | v1.32.0 | 03 Jan 24 19:53 UTC |                     |
	|         | binary-mirror-868924                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:45073                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-868924                                                                     | binary-mirror-868924   | jenkins | v1.32.0 | 03 Jan 24 19:53 UTC | 03 Jan 24 19:53 UTC |
	| addons  | disable dashboard -p                                                                        | addons-845596          | jenkins | v1.32.0 | 03 Jan 24 19:53 UTC |                     |
	|         | addons-845596                                                                               |                        |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-845596          | jenkins | v1.32.0 | 03 Jan 24 19:53 UTC |                     |
	|         | addons-845596                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-845596 --wait=true                                                                | addons-845596          | jenkins | v1.32.0 | 03 Jan 24 19:53 UTC | 03 Jan 24 19:56 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --driver=docker                                                               |                        |         |         |                     |                     |
	|         |  --container-runtime=crio                                                                   |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-845596          | jenkins | v1.32.0 | 03 Jan 24 19:56 UTC | 03 Jan 24 19:56 UTC |
	|         | -p addons-845596                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ip      | addons-845596 ip                                                                            | addons-845596          | jenkins | v1.32.0 | 03 Jan 24 19:56 UTC | 03 Jan 24 19:56 UTC |
	| addons  | addons-845596 addons disable                                                                | addons-845596          | jenkins | v1.32.0 | 03 Jan 24 19:56 UTC | 03 Jan 24 19:56 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-845596          | jenkins | v1.32.0 | 03 Jan 24 19:56 UTC | 03 Jan 24 19:56 UTC |
	|         | -p addons-845596                                                                            |                        |         |         |                     |                     |
	| ssh     | addons-845596 ssh cat                                                                       | addons-845596          | jenkins | v1.32.0 | 03 Jan 24 19:56 UTC | 03 Jan 24 19:56 UTC |
	|         | /opt/local-path-provisioner/pvc-2ad071b6-0e4d-454a-9eea-120e6c6d57fe_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-845596 addons disable                                                                | addons-845596          | jenkins | v1.32.0 | 03 Jan 24 19:56 UTC | 03 Jan 24 19:57 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-845596          | jenkins | v1.32.0 | 03 Jan 24 19:56 UTC | 03 Jan 24 19:56 UTC |
	|         | addons-845596                                                                               |                        |         |         |                     |                     |
	| addons  | addons-845596 addons                                                                        | addons-845596          | jenkins | v1.32.0 | 03 Jan 24 19:57 UTC | 03 Jan 24 19:57 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-845596 addons                                                                        | addons-845596          | jenkins | v1.32.0 | 03 Jan 24 19:57 UTC | 03 Jan 24 19:57 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-845596          | jenkins | v1.32.0 | 03 Jan 24 19:57 UTC | 03 Jan 24 19:57 UTC |
	|         | addons-845596                                                                               |                        |         |         |                     |                     |
	| addons  | addons-845596 addons                                                                        | addons-845596          | jenkins | v1.32.0 | 03 Jan 24 19:57 UTC | 03 Jan 24 19:57 UTC |
	|         | disable metrics-server                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ssh     | addons-845596 ssh curl -s                                                                   | addons-845596          | jenkins | v1.32.0 | 03 Jan 24 19:57 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| ip      | addons-845596 ip                                                                            | addons-845596          | jenkins | v1.32.0 | 03 Jan 24 19:59 UTC | 03 Jan 24 19:59 UTC |
	| addons  | addons-845596 addons disable                                                                | addons-845596          | jenkins | v1.32.0 | 03 Jan 24 19:59 UTC | 03 Jan 24 20:00 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-845596 addons disable                                                                | addons-845596          | jenkins | v1.32.0 | 03 Jan 24 20:00 UTC | 03 Jan 24 20:00 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
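	For reference, the start invocation wrapped across the Audit rows above, reassembled onto one line (a reconstruction from the table, flags in the order listed, not a separate log entry):
	
	out/minikube-linux-arm64 start -p addons-845596 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=docker --container-runtime=crio --addons=ingress --addons=ingress-dns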
	
	
	==> Last Start <==
	Log file created at: 2024/01/03 19:53:08
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.21.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0103 19:53:08.480826  415354 out.go:296] Setting OutFile to fd 1 ...
	I0103 19:53:08.480956  415354 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0103 19:53:08.480966  415354 out.go:309] Setting ErrFile to fd 2...
	I0103 19:53:08.480972  415354 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0103 19:53:08.481239  415354 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17885-409390/.minikube/bin
	I0103 19:53:08.481796  415354 out.go:303] Setting JSON to false
	I0103 19:53:08.482683  415354 start.go:128] hostinfo: {"hostname":"ip-172-31-29-130","uptime":5738,"bootTime":1704305851,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0103 19:53:08.482761  415354 start.go:138] virtualization:  
	I0103 19:53:08.484904  415354 out.go:177] * [addons-845596] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0103 19:53:08.487035  415354 out.go:177]   - MINIKUBE_LOCATION=17885
	I0103 19:53:08.488991  415354 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0103 19:53:08.487185  415354 notify.go:220] Checking for updates...
	I0103 19:53:08.492645  415354 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17885-409390/kubeconfig
	I0103 19:53:08.494216  415354 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17885-409390/.minikube
	I0103 19:53:08.495809  415354 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0103 19:53:08.497998  415354 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0103 19:53:08.500135  415354 driver.go:392] Setting default libvirt URI to qemu:///system
	I0103 19:53:08.524145  415354 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I0103 19:53:08.524260  415354 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0103 19:53:08.602918  415354 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:40 SystemTime:2024-01-03 19:53:08.592339394 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215044096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0103 19:53:08.603025  415354 docker.go:295] overlay module found
	I0103 19:53:08.604936  415354 out.go:177] * Using the docker driver based on user configuration
	I0103 19:53:08.606619  415354 start.go:298] selected driver: docker
	I0103 19:53:08.606637  415354 start.go:902] validating driver "docker" against <nil>
	I0103 19:53:08.606651  415354 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0103 19:53:08.607294  415354 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0103 19:53:08.674035  415354 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:40 SystemTime:2024-01-03 19:53:08.664287681 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215044096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0103 19:53:08.674204  415354 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0103 19:53:08.674451  415354 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0103 19:53:08.676147  415354 out.go:177] * Using Docker driver with root privileges
	I0103 19:53:08.678225  415354 cni.go:84] Creating CNI manager for ""
	I0103 19:53:08.678244  415354 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0103 19:53:08.678256  415354 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I0103 19:53:08.678278  415354 start_flags.go:323] config:
	{Name:addons-845596 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-845596 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0103 19:53:08.682125  415354 out.go:177] * Starting control plane node addons-845596 in cluster addons-845596
	I0103 19:53:08.684399  415354 cache.go:121] Beginning downloading kic base image for docker with crio
	I0103 19:53:08.686229  415354 out.go:177] * Pulling base image v0.0.42-1703498848-17857 ...
	I0103 19:53:08.687788  415354 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0103 19:53:08.687840  415354 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17885-409390/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4
	I0103 19:53:08.687861  415354 cache.go:56] Caching tarball of preloaded images
	I0103 19:53:08.687891  415354 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c in local docker daemon
	I0103 19:53:08.687945  415354 preload.go:174] Found /home/jenkins/minikube-integration/17885-409390/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0103 19:53:08.687957  415354 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0103 19:53:08.688298  415354 profile.go:148] Saving config to /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/addons-845596/config.json ...
	I0103 19:53:08.688436  415354 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/addons-845596/config.json: {Name:mka55bfc5d92bf8b9645fff063cfeb4fef6021e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 19:53:08.709036  415354 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c to local cache
	I0103 19:53:08.709199  415354 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c in local cache directory
	I0103 19:53:08.709221  415354 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c in local cache directory, skipping pull
	I0103 19:53:08.709225  415354 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c exists in cache, skipping pull
	I0103 19:53:08.709234  415354 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c as a tarball
	I0103 19:53:08.709239  415354 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c from local cache
	I0103 19:53:24.750356  415354 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c from cached tarball
	I0103 19:53:24.750415  415354 cache.go:194] Successfully downloaded all kic artifacts
	I0103 19:53:24.750465  415354 start.go:365] acquiring machines lock for addons-845596: {Name:mk3530b569aec21fa04cf31a0a70921788dadb72 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0103 19:53:24.750613  415354 start.go:369] acquired machines lock for "addons-845596" in 123.166µs
	I0103 19:53:24.750644  415354 start.go:93] Provisioning new machine with config: &{Name:addons-845596 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-845596 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0103 19:53:24.750723  415354 start.go:125] createHost starting for "" (driver="docker")
	I0103 19:53:24.752975  415354 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0103 19:53:24.753220  415354 start.go:159] libmachine.API.Create for "addons-845596" (driver="docker")
	I0103 19:53:24.753256  415354 client.go:168] LocalClient.Create starting
	I0103 19:53:24.753373  415354 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/17885-409390/.minikube/certs/ca.pem
	I0103 19:53:26.270163  415354 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/17885-409390/.minikube/certs/cert.pem
	I0103 19:53:26.579100  415354 cli_runner.go:164] Run: docker network inspect addons-845596 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0103 19:53:26.596673  415354 cli_runner.go:211] docker network inspect addons-845596 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0103 19:53:26.596764  415354 network_create.go:281] running [docker network inspect addons-845596] to gather additional debugging logs...
	I0103 19:53:26.596787  415354 cli_runner.go:164] Run: docker network inspect addons-845596
	W0103 19:53:26.614170  415354 cli_runner.go:211] docker network inspect addons-845596 returned with exit code 1
	I0103 19:53:26.614206  415354 network_create.go:284] error running [docker network inspect addons-845596]: docker network inspect addons-845596: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-845596 not found
	I0103 19:53:26.614219  415354 network_create.go:286] output of [docker network inspect addons-845596]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-845596 not found
	
	** /stderr **
	I0103 19:53:26.614325  415354 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0103 19:53:26.631939  415354 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400303e190}
	I0103 19:53:26.631981  415354 network_create.go:124] attempt to create docker network addons-845596 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0103 19:53:26.632046  415354 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-845596 addons-845596
	I0103 19:53:26.700050  415354 network_create.go:108] docker network addons-845596 192.168.49.0/24 created
	I0103 19:53:26.700080  415354 kic.go:121] calculated static IP "192.168.49.2" for the "addons-845596" container
	I0103 19:53:26.700151  415354 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0103 19:53:26.717509  415354 cli_runner.go:164] Run: docker volume create addons-845596 --label name.minikube.sigs.k8s.io=addons-845596 --label created_by.minikube.sigs.k8s.io=true
	I0103 19:53:26.736033  415354 oci.go:103] Successfully created a docker volume addons-845596
	I0103 19:53:26.736126  415354 cli_runner.go:164] Run: docker run --rm --name addons-845596-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-845596 --entrypoint /usr/bin/test -v addons-845596:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c -d /var/lib
	I0103 19:53:28.675537  415354 cli_runner.go:217] Completed: docker run --rm --name addons-845596-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-845596 --entrypoint /usr/bin/test -v addons-845596:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c -d /var/lib: (1.939369959s)
	I0103 19:53:28.675570  415354 oci.go:107] Successfully prepared a docker volume addons-845596
	I0103 19:53:28.675600  415354 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0103 19:53:28.675626  415354 kic.go:194] Starting extracting preloaded images to volume ...
	I0103 19:53:28.675717  415354 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17885-409390/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-845596:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c -I lz4 -xf /preloaded.tar -C /extractDir
	I0103 19:53:33.028725  415354 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17885-409390/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-845596:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c -I lz4 -xf /preloaded.tar -C /extractDir: (4.352966215s)
	I0103 19:53:33.028762  415354 kic.go:203] duration metric: took 4.353139 seconds to extract preloaded images to volume
	W0103 19:53:33.028923  415354 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0103 19:53:33.029042  415354 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0103 19:53:33.099479  415354 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-845596 --name addons-845596 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-845596 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-845596 --network addons-845596 --ip 192.168.49.2 --volume addons-845596:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c
	I0103 19:53:33.460969  415354 cli_runner.go:164] Run: docker container inspect addons-845596 --format={{.State.Running}}
	I0103 19:53:33.483751  415354 cli_runner.go:164] Run: docker container inspect addons-845596 --format={{.State.Status}}
	I0103 19:53:33.511093  415354 cli_runner.go:164] Run: docker exec addons-845596 stat /var/lib/dpkg/alternatives/iptables
	I0103 19:53:33.590895  415354 oci.go:144] the created container "addons-845596" has a running status.
	I0103 19:53:33.590929  415354 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17885-409390/.minikube/machines/addons-845596/id_rsa...
	I0103 19:53:33.875292  415354 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17885-409390/.minikube/machines/addons-845596/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0103 19:53:33.912090  415354 cli_runner.go:164] Run: docker container inspect addons-845596 --format={{.State.Status}}
	I0103 19:53:33.937209  415354 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0103 19:53:33.937235  415354 kic_runner.go:114] Args: [docker exec --privileged addons-845596 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0103 19:53:34.008577  415354 cli_runner.go:164] Run: docker container inspect addons-845596 --format={{.State.Status}}
	I0103 19:53:34.036104  415354 machine.go:88] provisioning docker machine ...
	I0103 19:53:34.036137  415354 ubuntu.go:169] provisioning hostname "addons-845596"
	I0103 19:53:34.036212  415354 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-845596
	I0103 19:53:34.074913  415354 main.go:141] libmachine: Using SSH client type: native
	I0103 19:53:34.075350  415354 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3beb50] 0x3c12c0 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I0103 19:53:34.075372  415354 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-845596 && echo "addons-845596" | sudo tee /etc/hostname
	I0103 19:53:34.075957  415354 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:59434->127.0.0.1:33103: read: connection reset by peer
	I0103 19:53:37.231637  415354 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-845596
	
	I0103 19:53:37.231798  415354 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-845596
	I0103 19:53:37.256076  415354 main.go:141] libmachine: Using SSH client type: native
	I0103 19:53:37.256482  415354 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3beb50] 0x3c12c0 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I0103 19:53:37.256505  415354 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-845596' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-845596/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-845596' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0103 19:53:37.399851  415354 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0103 19:53:37.399886  415354 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17885-409390/.minikube CaCertPath:/home/jenkins/minikube-integration/17885-409390/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17885-409390/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17885-409390/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17885-409390/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17885-409390/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17885-409390/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17885-409390/.minikube}
	I0103 19:53:37.399911  415354 ubuntu.go:177] setting up certificates
	I0103 19:53:37.399923  415354 provision.go:83] configureAuth start
	I0103 19:53:37.399983  415354 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-845596
	I0103 19:53:37.417742  415354 provision.go:138] copyHostCerts
	I0103 19:53:37.417827  415354 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17885-409390/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17885-409390/.minikube/ca.pem (1078 bytes)
	I0103 19:53:37.417946  415354 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17885-409390/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17885-409390/.minikube/cert.pem (1123 bytes)
	I0103 19:53:37.418006  415354 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17885-409390/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17885-409390/.minikube/key.pem (1679 bytes)
	I0103 19:53:37.418055  415354 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17885-409390/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17885-409390/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17885-409390/.minikube/certs/ca-key.pem org=jenkins.addons-845596 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube addons-845596]
	I0103 19:53:38.508389  415354 provision.go:172] copyRemoteCerts
	I0103 19:53:38.508470  415354 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0103 19:53:38.508516  415354 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-845596
	I0103 19:53:38.530740  415354 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/17885-409390/.minikube/machines/addons-845596/id_rsa Username:docker}
	I0103 19:53:38.633060  415354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-409390/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0103 19:53:38.662541  415354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-409390/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0103 19:53:38.691479  415354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-409390/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0103 19:53:38.720477  415354 provision.go:86] duration metric: configureAuth took 1.32054003s
	I0103 19:53:38.720502  415354 ubuntu.go:193] setting minikube options for container-runtime
	I0103 19:53:38.720685  415354 config.go:182] Loaded profile config "addons-845596": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0103 19:53:38.720804  415354 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-845596
	I0103 19:53:38.738973  415354 main.go:141] libmachine: Using SSH client type: native
	I0103 19:53:38.739392  415354 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3beb50] 0x3c12c0 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I0103 19:53:38.739414  415354 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0103 19:53:38.990111  415354 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0103 19:53:38.990132  415354 machine.go:91] provisioned docker machine in 4.954005918s
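The container-runtime options step above leaves a one-line environment file behind; reconstructed from the command and its echoed output, the node should now contain:

    $ cat /etc/sysconfig/crio.minikube
    CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '

with the trailing "systemctl restart crio" making CRI-O pick it up.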
	I0103 19:53:38.990142  415354 client.go:171] LocalClient.Create took 14.236878609s
	I0103 19:53:38.990154  415354 start.go:167] duration metric: libmachine.API.Create for "addons-845596" took 14.236935429s
	I0103 19:53:38.990169  415354 start.go:300] post-start starting for "addons-845596" (driver="docker")
	I0103 19:53:38.990178  415354 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0103 19:53:38.990243  415354 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0103 19:53:38.990289  415354 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-845596
	I0103 19:53:39.007906  415354 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/17885-409390/.minikube/machines/addons-845596/id_rsa Username:docker}
	I0103 19:53:39.110322  415354 ssh_runner.go:195] Run: cat /etc/os-release
	I0103 19:53:39.114710  415354 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0103 19:53:39.114752  415354 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0103 19:53:39.114764  415354 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0103 19:53:39.114772  415354 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0103 19:53:39.114784  415354 filesync.go:126] Scanning /home/jenkins/minikube-integration/17885-409390/.minikube/addons for local assets ...
	I0103 19:53:39.114862  415354 filesync.go:126] Scanning /home/jenkins/minikube-integration/17885-409390/.minikube/files for local assets ...
	I0103 19:53:39.114907  415354 start.go:303] post-start completed in 124.73224ms
	I0103 19:53:39.115239  415354 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-845596
	I0103 19:53:39.133978  415354 profile.go:148] Saving config to /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/addons-845596/config.json ...
	I0103 19:53:39.134365  415354 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0103 19:53:39.134434  415354 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-845596
	I0103 19:53:39.153308  415354 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/17885-409390/.minikube/machines/addons-845596/id_rsa Username:docker}
	I0103 19:53:39.248798  415354 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0103 19:53:39.255081  415354 start.go:128] duration metric: createHost completed in 14.504337161s
	I0103 19:53:39.255157  415354 start.go:83] releasing machines lock for "addons-845596", held for 14.504530719s
	I0103 19:53:39.255243  415354 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-845596
	I0103 19:53:39.272828  415354 ssh_runner.go:195] Run: cat /version.json
	I0103 19:53:39.272880  415354 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-845596
	I0103 19:53:39.272891  415354 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0103 19:53:39.273020  415354 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-845596
	I0103 19:53:39.297824  415354 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/17885-409390/.minikube/machines/addons-845596/id_rsa Username:docker}
	I0103 19:53:39.307953  415354 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/17885-409390/.minikube/machines/addons-845596/id_rsa Username:docker}
	I0103 19:53:39.391039  415354 ssh_runner.go:195] Run: systemctl --version
	I0103 19:53:39.534980  415354 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0103 19:53:39.681375  415354 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0103 19:53:39.686850  415354 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0103 19:53:39.709272  415354 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0103 19:53:39.709347  415354 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0103 19:53:39.742837  415354 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
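The two find invocations above disable conflicting CNI configs by renaming rather than deleting, so the originals stay recoverable. The same idea with simpler quoting:

    sudo find /etc/cni/net.d -maxdepth 1 -type f \
      \( -name '*bridge*' -o -name '*podman*' \) ! -name '*.mk_disabled' \
      -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;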
	I0103 19:53:39.742858  415354 start.go:475] detecting cgroup driver to use...
	I0103 19:53:39.742888  415354 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0103 19:53:39.742937  415354 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0103 19:53:39.761227  415354 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0103 19:53:39.774783  415354 docker.go:203] disabling cri-docker service (if available) ...
	I0103 19:53:39.774898  415354 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0103 19:53:39.791514  415354 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0103 19:53:39.808564  415354 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0103 19:53:39.897963  415354 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0103 19:53:40.004766  415354 docker.go:219] disabling docker service ...
	I0103 19:53:40.004880  415354 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0103 19:53:40.049653  415354 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0103 19:53:40.066127  415354 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0103 19:53:40.163290  415354 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0103 19:53:40.269425  415354 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0103 19:53:40.283196  415354 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0103 19:53:40.303247  415354 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0103 19:53:40.303356  415354 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0103 19:53:40.316761  415354 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0103 19:53:40.316841  415354 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0103 19:53:40.329246  415354 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0103 19:53:40.341410  415354 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
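After the sed edits above, the drop-in should read roughly as follows (a sketch assuming the stock kicbase layout of /etc/crio/crio.conf.d/02-crio.conf; the section headers are CRI-O's usual ones):

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.9"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"     # inserted right after cgroup_manager by the sed above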
	I0103 19:53:40.354169  415354 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0103 19:53:40.365550  415354 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0103 19:53:40.375978  415354 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0103 19:53:40.386445  415354 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0103 19:53:40.474623  415354 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0103 19:53:40.608074  415354 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0103 19:53:40.608211  415354 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0103 19:53:40.613661  415354 start.go:543] Will wait 60s for crictl version
	I0103 19:53:40.613756  415354 ssh_runner.go:195] Run: which crictl
	I0103 19:53:40.618259  415354 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0103 19:53:40.659821  415354 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0103 19:53:40.659975  415354 ssh_runner.go:195] Run: crio --version
	I0103 19:53:40.707578  415354 ssh_runner.go:195] Run: crio --version
	I0103 19:53:40.756322  415354 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.6 ...
	I0103 19:53:40.758041  415354 cli_runner.go:164] Run: docker network inspect addons-845596 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0103 19:53:40.775414  415354 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0103 19:53:40.780055  415354 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
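The one-liner above is the usual filter-append-copy idiom for /etc/hosts: drop any stale entry, append the fresh mapping, write to a per-PID temp file, then copy back as root. Expanded for readability:

    { grep -v $'\thost.minikube.internal$' /etc/hosts
      echo $'192.168.49.1\thost.minikube.internal'
    } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts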
	I0103 19:53:40.793760  415354 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0103 19:53:40.793842  415354 ssh_runner.go:195] Run: sudo crictl images --output json
	I0103 19:53:40.871175  415354 crio.go:496] all images are preloaded for cri-o runtime.
	I0103 19:53:40.871199  415354 crio.go:415] Images already preloaded, skipping extraction
	I0103 19:53:40.871255  415354 ssh_runner.go:195] Run: sudo crictl images --output json
	I0103 19:53:40.915228  415354 crio.go:496] all images are preloaded for cri-o runtime.
	I0103 19:53:40.915253  415354 cache_images.go:84] Images are preloaded, skipping loading
	I0103 19:53:40.915328  415354 ssh_runner.go:195] Run: crio config
	I0103 19:53:40.982925  415354 cni.go:84] Creating CNI manager for ""
	I0103 19:53:40.982949  415354 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0103 19:53:40.983007  415354 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0103 19:53:40.983037  415354 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-845596 NodeName:addons-845596 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0103 19:53:40.983183  415354 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-845596"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
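If the rendered config above needs checking by hand, kubeadm can exercise it without touching node state (run on the node; the path matches the scp destination below):

    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run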
	
	I0103 19:53:40.983264  415354 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=addons-845596 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:addons-845596 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
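Once written, the unit and its 10-kubeadm.conf drop-in can be inspected with standard systemd tooling, nothing minikube-specific:

    systemctl cat kubelet               # unit file plus drop-ins, with source paths
    systemctl status kubelet --no-pager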
	I0103 19:53:40.983331  415354 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0103 19:53:40.994134  415354 binaries.go:44] Found k8s binaries, skipping transfer
	I0103 19:53:40.994231  415354 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0103 19:53:41.004729  415354 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (423 bytes)
	I0103 19:53:41.027678  415354 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0103 19:53:41.049368  415354 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2094 bytes)
	I0103 19:53:41.071799  415354 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0103 19:53:41.076323  415354 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0103 19:53:41.089780  415354 certs.go:56] Setting up /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/addons-845596 for IP: 192.168.49.2
	I0103 19:53:41.089813  415354 certs.go:190] acquiring lock for shared ca certs: {Name:mk7a87d13d39d2defe5d349d371b78fa1f1e95bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 19:53:41.089987  415354 certs.go:204] generating minikubeCA CA: /home/jenkins/minikube-integration/17885-409390/.minikube/ca.key
	I0103 19:53:41.454651  415354 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17885-409390/.minikube/ca.crt ...
	I0103 19:53:41.454681  415354 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17885-409390/.minikube/ca.crt: {Name:mk38f5caa862b5365bfe4420b21f3095453a1d38 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 19:53:41.454875  415354 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17885-409390/.minikube/ca.key ...
	I0103 19:53:41.454889  415354 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17885-409390/.minikube/ca.key: {Name:mk3f5127a06a06cc546c9fcdf79fed5850a053af Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 19:53:41.454969  415354 certs.go:204] generating proxyClientCA CA: /home/jenkins/minikube-integration/17885-409390/.minikube/proxy-client-ca.key
	I0103 19:53:41.835735  415354 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17885-409390/.minikube/proxy-client-ca.crt ...
	I0103 19:53:41.835766  415354 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17885-409390/.minikube/proxy-client-ca.crt: {Name:mkf1033916b55523e6c6aa1ff61bc59ca4b1572c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 19:53:41.835949  415354 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17885-409390/.minikube/proxy-client-ca.key ...
	I0103 19:53:41.835960  415354 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17885-409390/.minikube/proxy-client-ca.key: {Name:mkd6f745f6af70cfae7153fb884c331c0ec2ea26 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 19:53:41.836079  415354 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/addons-845596/client.key
	I0103 19:53:41.836095  415354 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/addons-845596/client.crt with IP's: []
	I0103 19:53:42.136062  415354 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/addons-845596/client.crt ...
	I0103 19:53:42.137322  415354 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/addons-845596/client.crt: {Name:mk7645b553fe2f19ad5baa28f5a5a2a77a92d867 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 19:53:42.137604  415354 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/addons-845596/client.key ...
	I0103 19:53:42.148864  415354 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/addons-845596/client.key: {Name:mkc572cd3933ceb60245146594bee6cf8ce8c8de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 19:53:42.149044  415354 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/addons-845596/apiserver.key.dd3b5fb2
	I0103 19:53:42.149072  415354 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/addons-845596/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0103 19:53:42.653767  415354 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/addons-845596/apiserver.crt.dd3b5fb2 ...
	I0103 19:53:42.653798  415354 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/addons-845596/apiserver.crt.dd3b5fb2: {Name:mka7f0c3ac3396f23b25667ed71de8cb514e3fe4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 19:53:42.653981  415354 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/addons-845596/apiserver.key.dd3b5fb2 ...
	I0103 19:53:42.653995  415354 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/addons-845596/apiserver.key.dd3b5fb2: {Name:mk36746c6b82713430e16cd4060462095efc6a5e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 19:53:42.654083  415354 certs.go:337] copying /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/addons-845596/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/addons-845596/apiserver.crt
	I0103 19:53:42.654155  415354 certs.go:341] copying /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/addons-845596/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/addons-845596/apiserver.key
	I0103 19:53:42.654210  415354 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/addons-845596/proxy-client.key
	I0103 19:53:42.654232  415354 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/addons-845596/proxy-client.crt with IP's: []
	I0103 19:53:43.536426  415354 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/addons-845596/proxy-client.crt ...
	I0103 19:53:43.536462  415354 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/addons-845596/proxy-client.crt: {Name:mk8550e02b2c6de0961d7fca7b7013cb057a859e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 19:53:43.536651  415354 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/addons-845596/proxy-client.key ...
	I0103 19:53:43.536665  415354 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/addons-845596/proxy-client.key: {Name:mk2ed5be6f5a71860931b7fd47f7ab7fcf317148 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 19:53:43.536882  415354 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-409390/.minikube/certs/ca-key.pem (1679 bytes)
	I0103 19:53:43.536927  415354 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-409390/.minikube/certs/ca.pem (1078 bytes)
	I0103 19:53:43.536957  415354 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-409390/.minikube/certs/cert.pem (1123 bytes)
	I0103 19:53:43.536985  415354 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-409390/.minikube/certs/key.pem (1679 bytes)
	I0103 19:53:43.537646  415354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/addons-845596/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0103 19:53:43.567850  415354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/addons-845596/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0103 19:53:43.597876  415354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/addons-845596/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0103 19:53:43.626696  415354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/addons-845596/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0103 19:53:43.656135  415354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-409390/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0103 19:53:43.685906  415354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-409390/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0103 19:53:43.715297  415354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-409390/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0103 19:53:43.744777  415354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-409390/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0103 19:53:43.773641  415354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-409390/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0103 19:53:43.802125  415354 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0103 19:53:43.823825  415354 ssh_runner.go:195] Run: openssl version
	I0103 19:53:43.830815  415354 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0103 19:53:43.842281  415354 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0103 19:53:43.846992  415354 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan  3 19:53 /usr/share/ca-certificates/minikubeCA.pem
	I0103 19:53:43.847056  415354 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0103 19:53:43.855627  415354 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
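The b5213941.0 name above follows OpenSSL's subject-hash convention: clients locate a CA by hashing its subject and resolving a <hash>.0 symlink in the certs directory. Done by hand:

    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/$HASH.0"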
	I0103 19:53:43.867197  415354 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0103 19:53:43.871742  415354 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0103 19:53:43.871793  415354 kubeadm.go:404] StartCluster: {Name:addons-845596 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-845596 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0103 19:53:43.871898  415354 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0103 19:53:43.871970  415354 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0103 19:53:43.922206  415354 cri.go:89] found id: ""
	I0103 19:53:43.922294  415354 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0103 19:53:43.933325  415354 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0103 19:53:43.943987  415354 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0103 19:53:43.944067  415354 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0103 19:53:43.954364  415354 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0103 19:53:43.954406  415354 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0103 19:53:44.061089  415354 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1051-aws\n", err: exit status 1
	I0103 19:53:44.142946  415354 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0103 19:53:58.906236  415354 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0103 19:53:58.906291  415354 kubeadm.go:322] [preflight] Running pre-flight checks
	I0103 19:53:58.906373  415354 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0103 19:53:58.906425  415354 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1051-aws
	I0103 19:53:58.906458  415354 kubeadm.go:322] OS: Linux
	I0103 19:53:58.906501  415354 kubeadm.go:322] CGROUPS_CPU: enabled
	I0103 19:53:58.906565  415354 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0103 19:53:58.906610  415354 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0103 19:53:58.906655  415354 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0103 19:53:58.906701  415354 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0103 19:53:58.906751  415354 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0103 19:53:58.906794  415354 kubeadm.go:322] CGROUPS_PIDS: enabled
	I0103 19:53:58.906840  415354 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I0103 19:53:58.906884  415354 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I0103 19:53:58.906951  415354 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0103 19:53:58.907039  415354 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0103 19:53:58.907125  415354 kubeadm.go:322] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0103 19:53:58.907183  415354 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0103 19:53:58.908977  415354 out.go:204]   - Generating certificates and keys ...
	I0103 19:53:58.909070  415354 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0103 19:53:58.909133  415354 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0103 19:53:58.909195  415354 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0103 19:53:58.909248  415354 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0103 19:53:58.909304  415354 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0103 19:53:58.909351  415354 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0103 19:53:58.909401  415354 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0103 19:53:58.909510  415354 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-845596 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0103 19:53:58.909560  415354 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0103 19:53:58.909666  415354 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-845596 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0103 19:53:58.909727  415354 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0103 19:53:58.909786  415354 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0103 19:53:58.909828  415354 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0103 19:53:58.909899  415354 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0103 19:53:58.909947  415354 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0103 19:53:58.909997  415354 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0103 19:53:58.910058  415354 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0103 19:53:58.910109  415354 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0103 19:53:58.910185  415354 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0103 19:53:58.910246  415354 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0103 19:53:58.913545  415354 out.go:204]   - Booting up control plane ...
	I0103 19:53:58.913736  415354 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0103 19:53:58.913867  415354 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0103 19:53:58.913968  415354 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0103 19:53:58.914123  415354 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0103 19:53:58.914219  415354 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0103 19:53:58.914260  415354 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0103 19:53:58.914414  415354 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0103 19:53:58.914491  415354 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.502116 seconds
	I0103 19:53:58.914608  415354 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0103 19:53:58.914734  415354 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0103 19:53:58.914793  415354 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0103 19:53:58.914972  415354 kubeadm.go:322] [mark-control-plane] Marking the node addons-845596 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0103 19:53:58.915028  415354 kubeadm.go:322] [bootstrap-token] Using token: w9k3nf.ec2a719ick2tugcw
	I0103 19:53:58.916887  415354 out.go:204]   - Configuring RBAC rules ...
	I0103 19:53:58.917018  415354 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0103 19:53:58.917103  415354 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0103 19:53:58.917243  415354 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0103 19:53:58.917368  415354 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0103 19:53:58.917482  415354 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0103 19:53:58.917580  415354 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0103 19:53:58.917703  415354 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0103 19:53:58.917748  415354 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0103 19:53:58.917794  415354 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0103 19:53:58.917799  415354 kubeadm.go:322] 
	I0103 19:53:58.917863  415354 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0103 19:53:58.917868  415354 kubeadm.go:322] 
	I0103 19:53:58.917944  415354 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0103 19:53:58.917948  415354 kubeadm.go:322] 
	I0103 19:53:58.917974  415354 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0103 19:53:58.918032  415354 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0103 19:53:58.918083  415354 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0103 19:53:58.918087  415354 kubeadm.go:322] 
	I0103 19:53:58.918145  415354 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0103 19:53:58.918150  415354 kubeadm.go:322] 
	I0103 19:53:58.918197  415354 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0103 19:53:58.918202  415354 kubeadm.go:322] 
	I0103 19:53:58.918254  415354 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0103 19:53:58.918328  415354 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0103 19:53:58.918395  415354 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0103 19:53:58.918399  415354 kubeadm.go:322] 
	I0103 19:53:58.918483  415354 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0103 19:53:58.918693  415354 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0103 19:53:58.918726  415354 kubeadm.go:322] 
	I0103 19:53:58.918975  415354 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token w9k3nf.ec2a719ick2tugcw \
	I0103 19:53:58.919089  415354 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:497121a93982227783f08bfd7c063fc9f8a8d85d0f5f85a6107fb52aadafec60 \
	I0103 19:53:58.919111  415354 kubeadm.go:322] 	--control-plane 
	I0103 19:53:58.919115  415354 kubeadm.go:322] 
	I0103 19:53:58.919206  415354 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0103 19:53:58.919211  415354 kubeadm.go:322] 
	I0103 19:53:58.919293  415354 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token w9k3nf.ec2a719ick2tugcw \
	I0103 19:53:58.919407  415354 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:497121a93982227783f08bfd7c063fc9f8a8d85d0f5f85a6107fb52aadafec60 
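With init complete, the usual post-checks are plain kubeadm/kubectl (run on the node):

    sudo kubectl --kubeconfig /etc/kubernetes/admin.conf get nodes
    sudo kubeadm token list    # should list the bootstrap token above with its 24h TTL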
	I0103 19:53:58.919414  415354 cni.go:84] Creating CNI manager for ""
	I0103 19:53:58.919421  415354 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0103 19:53:58.922472  415354 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0103 19:53:58.924108  415354 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0103 19:53:58.939125  415354 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0103 19:53:58.939143  415354 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0103 19:53:58.971023  415354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
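With the manifest applied, the CNI rollout can be watched like any workload (the DaemonSet name and label here are assumptions based on the kindnet recommendation above):

    kubectl -n kube-system rollout status daemonset kindnet
    kubectl -n kube-system get pods -l app=kindnet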
	I0103 19:53:59.897508  415354 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0103 19:53:59.897646  415354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 19:53:59.897743  415354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=1b6a81cbc05f28310ff11df4170e79e2b8bf477a minikube.k8s.io/name=addons-845596 minikube.k8s.io/updated_at=2024_01_03T19_53_59_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 19:53:59.916370  415354 ops.go:34] apiserver oom_adj: -16
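The -16 read above uses the legacy /proc interface; modern kernels expose the same knob as oom_score_adj on a wider scale. Both are directly readable:

    cat /proc/$(pgrep kube-apiserver)/oom_adj        # legacy range -17..15
    cat /proc/$(pgrep kube-apiserver)/oom_score_adj  # current range -1000..1000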
	I0103 19:54:00.247095  415354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 19:54:00.747719  415354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 19:54:01.247904  415354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 19:54:01.747710  415354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 19:54:02.247356  415354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 19:54:02.747212  415354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 19:54:03.247225  415354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 19:54:03.747208  415354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 19:54:04.247499  415354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 19:54:04.747888  415354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 19:54:05.247250  415354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 19:54:05.747304  415354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 19:54:06.247755  415354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 19:54:06.747252  415354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 19:54:07.247766  415354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 19:54:07.747219  415354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 19:54:08.247767  415354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 19:54:08.747335  415354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 19:54:09.247680  415354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 19:54:09.747613  415354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 19:54:10.248172  415354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 19:54:10.747546  415354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 19:54:11.248181  415354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 19:54:11.747189  415354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 19:54:12.247714  415354 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 19:54:12.351002  415354 kubeadm.go:1088] duration metric: took 12.453397017s to wait for elevateKubeSystemPrivileges.
	I0103 19:54:12.351034  415354 kubeadm.go:406] StartCluster complete in 28.479243335s
	I0103 19:54:12.351050  415354 settings.go:142] acquiring lock: {Name:mk35e0b2d8071191a72193c66ba9549131012420 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 19:54:12.351159  415354 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17885-409390/kubeconfig
	I0103 19:54:12.351592  415354 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17885-409390/kubeconfig: {Name:mkcf9b222e1b36afc1c2e4e412234b0c105c9bc2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 19:54:12.353706  415354 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0103 19:54:12.353968  415354 config.go:182] Loaded profile config "addons-845596": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0103 19:54:12.354022  415354 addons.go:505] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true yakd:true]
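The toEnable map above is what the CLI exposes per profile; the same toggles are reachable with standard minikube commands:

    minikube -p addons-845596 addons list
    minikube -p addons-845596 addons enable ingress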
	I0103 19:54:12.354105  415354 addons.go:69] Setting yakd=true in profile "addons-845596"
	I0103 19:54:12.354131  415354 addons.go:237] Setting addon yakd=true in "addons-845596"
	I0103 19:54:12.354174  415354 host.go:66] Checking if "addons-845596" exists ...
	I0103 19:54:12.354712  415354 cli_runner.go:164] Run: docker container inspect addons-845596 --format={{.State.Status}}
	I0103 19:54:12.355070  415354 addons.go:69] Setting inspektor-gadget=true in profile "addons-845596"
	I0103 19:54:12.355087  415354 addons.go:237] Setting addon inspektor-gadget=true in "addons-845596"
	I0103 19:54:12.355120  415354 host.go:66] Checking if "addons-845596" exists ...
	I0103 19:54:12.355535  415354 cli_runner.go:164] Run: docker container inspect addons-845596 --format={{.State.Status}}
	I0103 19:54:12.355770  415354 addons.go:69] Setting metrics-server=true in profile "addons-845596"
	I0103 19:54:12.355796  415354 addons.go:237] Setting addon metrics-server=true in "addons-845596"
	I0103 19:54:12.355837  415354 host.go:66] Checking if "addons-845596" exists ...
	I0103 19:54:12.356232  415354 cli_runner.go:164] Run: docker container inspect addons-845596 --format={{.State.Status}}
	I0103 19:54:12.356376  415354 addons.go:69] Setting cloud-spanner=true in profile "addons-845596"
	I0103 19:54:12.356397  415354 addons.go:237] Setting addon cloud-spanner=true in "addons-845596"
	I0103 19:54:12.356432  415354 host.go:66] Checking if "addons-845596" exists ...
	I0103 19:54:12.356821  415354 cli_runner.go:164] Run: docker container inspect addons-845596 --format={{.State.Status}}
	I0103 19:54:12.357329  415354 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-845596"
	I0103 19:54:12.357352  415354 addons.go:237] Setting addon nvidia-device-plugin=true in "addons-845596"
	I0103 19:54:12.357402  415354 host.go:66] Checking if "addons-845596" exists ...
	I0103 19:54:12.357848  415354 cli_runner.go:164] Run: docker container inspect addons-845596 --format={{.State.Status}}
	I0103 19:54:12.367728  415354 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-845596"
	I0103 19:54:12.372713  415354 addons.go:237] Setting addon csi-hostpath-driver=true in "addons-845596"
	I0103 19:54:12.372815  415354 host.go:66] Checking if "addons-845596" exists ...
	I0103 19:54:12.386482  415354 cli_runner.go:164] Run: docker container inspect addons-845596 --format={{.State.Status}}
	I0103 19:54:12.368838  415354 addons.go:69] Setting registry=true in profile "addons-845596"
	I0103 19:54:12.414692  415354 addons.go:237] Setting addon registry=true in "addons-845596"
	I0103 19:54:12.414791  415354 host.go:66] Checking if "addons-845596" exists ...
	I0103 19:54:12.416319  415354 cli_runner.go:164] Run: docker container inspect addons-845596 --format={{.State.Status}}
	I0103 19:54:12.368852  415354 addons.go:69] Setting storage-provisioner=true in profile "addons-845596"
	I0103 19:54:12.428703  415354 addons.go:237] Setting addon storage-provisioner=true in "addons-845596"
	I0103 19:54:12.428801  415354 host.go:66] Checking if "addons-845596" exists ...
	I0103 19:54:12.429376  415354 cli_runner.go:164] Run: docker container inspect addons-845596 --format={{.State.Status}}
	I0103 19:54:12.368863  415354 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-845596"
	I0103 19:54:12.451572  415354 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-845596"
	I0103 19:54:12.451981  415354 cli_runner.go:164] Run: docker container inspect addons-845596 --format={{.State.Status}}
	I0103 19:54:12.368881  415354 addons.go:69] Setting volumesnapshots=true in profile "addons-845596"
	I0103 19:54:12.500315  415354 addons.go:237] Setting addon volumesnapshots=true in "addons-845596"
	I0103 19:54:12.500387  415354 host.go:66] Checking if "addons-845596" exists ...
	I0103 19:54:12.500928  415354 cli_runner.go:164] Run: docker container inspect addons-845596 --format={{.State.Status}}
	I0103 19:54:12.372609  415354 addons.go:69] Setting default-storageclass=true in profile "addons-845596"
	I0103 19:54:12.520270  415354 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-845596"
	I0103 19:54:12.520668  415354 cli_runner.go:164] Run: docker container inspect addons-845596 --format={{.State.Status}}
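Each `Setting addon … in "addons-845596"` pair above is followed by a `cli_runner` call that verifies the machine container is up before the addon is deployed. A minimal sketch of that check, shelling out to the exact `docker container inspect` command the log shows (the helper name and error handling are assumptions, not minikube's API):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerState returns Docker's view of the machine container, e.g.
// "running" or "exited", using the same --format the cli_runner lines use.
func containerState(name string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect",
		name, "--format", "{{.State.Status}}").Output()
	if err != nil {
		return "", fmt.Errorf("inspect %s: %w", name, err)
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	state, err := containerState("addons-845596")
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println("state:", state)
}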
	I0103 19:54:12.532312  415354 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0103 19:54:12.372630  415354 addons.go:69] Setting gcp-auth=true in profile "addons-845596"
	I0103 19:54:12.372649  415354 addons.go:69] Setting ingress=true in profile "addons-845596"
	I0103 19:54:12.372658  415354 addons.go:69] Setting ingress-dns=true in profile "addons-845596"
	I0103 19:54:12.576813  415354 addons.go:237] Setting addon ingress-dns=true in "addons-845596"
	I0103 19:54:12.576945  415354 host.go:66] Checking if "addons-845596" exists ...
	I0103 19:54:12.577534  415354 cli_runner.go:164] Run: docker container inspect addons-845596 --format={{.State.Status}}
	I0103 19:54:12.607184  415354 addons.go:429] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0103 19:54:12.607213  415354 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0103 19:54:12.607279  415354 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-845596
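The `installing …/yakd-ns.yaml` / `scp memory` pair above shows how manifests reach the node: the bytes are streamed over SSH rather than copied from a local file, and the `cli_runner` line resolves which host port Docker published for the container's sshd. A sketch of that port lookup, reusing the log's own Go template:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// sshHostPort asks Docker which host port is mapped to the machine's 22/tcp,
// using the same template the cli_runner line above passes to -f.
func sshHostPort(container string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect", "-f",
		`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
		container).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := sshHostPort("addons-845596")
	fmt.Println(port, err) // the log resolves this to 33103
}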
	I0103 19:54:12.614815  415354 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.13
	I0103 19:54:12.620621  415354 addons.go:429] installing /etc/kubernetes/addons/deployment.yaml
	I0103 19:54:12.620645  415354 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0103 19:54:12.620724  415354 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-845596
	I0103 19:54:12.625075  415354 mustload.go:65] Loading cluster: addons-845596
	I0103 19:54:12.625242  415354 addons.go:237] Setting addon ingress=true in "addons-845596"
	I0103 19:54:12.625343  415354 host.go:66] Checking if "addons-845596" exists ...
	I0103 19:54:12.636039  415354 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.6.4
	I0103 19:54:12.631272  415354 cli_runner.go:164] Run: docker container inspect addons-845596 --format={{.State.Status}}
	I0103 19:54:12.631995  415354 config.go:182] Loaded profile config "addons-845596": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0103 19:54:12.663398  415354 out.go:177]   - Using image docker.io/registry:2.8.3
	I0103 19:54:12.645358  415354 addons.go:429] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0103 19:54:12.645371  415354 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.24.0
	I0103 19:54:12.645376  415354 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.3
	I0103 19:54:12.645388  415354 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0103 19:54:12.662953  415354 cli_runner.go:164] Run: docker container inspect addons-845596 --format={{.State.Status}}
	I0103 19:54:12.663388  415354 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0103 19:54:12.669667  415354 addons.go:237] Setting addon storage-provisioner-rancher=true in "addons-845596"
	I0103 19:54:12.670077  415354 host.go:66] Checking if "addons-845596" exists ...
	I0103 19:54:12.670668  415354 cli_runner.go:164] Run: docker container inspect addons-845596 --format={{.State.Status}}
	I0103 19:54:12.706756  415354 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I0103 19:54:12.710773  415354 addons.go:429] installing /etc/kubernetes/addons/registry-rc.yaml
	I0103 19:54:12.710832  415354 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0103 19:54:12.710931  415354 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-845596
	I0103 19:54:12.679155  415354 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0103 19:54:12.728929  415354 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-845596
	I0103 19:54:12.731501  415354 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0103 19:54:12.757069  415354 addons.go:429] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0103 19:54:12.757094  415354 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0103 19:54:12.757162  415354 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-845596
	I0103 19:54:12.759766  415354 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0103 19:54:12.764003  415354 addons.go:429] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0103 19:54:12.764066  415354 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0103 19:54:12.764168  415354 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-845596
	I0103 19:54:12.779329  415354 addons.go:237] Setting addon default-storageclass=true in "addons-845596"
	I0103 19:54:12.779370  415354 host.go:66] Checking if "addons-845596" exists ...
	I0103 19:54:12.779842  415354 cli_runner.go:164] Run: docker container inspect addons-845596 --format={{.State.Status}}
	I0103 19:54:12.788290  415354 addons.go:429] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0103 19:54:12.788322  415354 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0103 19:54:12.788392  415354 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-845596
	I0103 19:54:12.800328  415354 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0103 19:54:12.795835  415354 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/17885-409390/.minikube/machines/addons-845596/id_rsa Username:docker}
	I0103 19:54:12.795939  415354 addons.go:429] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0103 19:54:12.887516  415354 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0103 19:54:12.896877  415354 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0103 19:54:12.828534  415354 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0103 19:54:12.828649  415354 addons.go:429] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0103 19:54:12.903714  415354 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0103 19:54:12.903737  415354 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.9.5
	I0103 19:54:12.903823  415354 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-845596
	I0103 19:54:12.903868  415354 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-845596
	I0103 19:54:12.905760  415354 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/17885-409390/.minikube/machines/addons-845596/id_rsa Username:docker}
	I0103 19:54:12.925656  415354 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I0103 19:54:12.929085  415354 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I0103 19:54:12.931574  415354 addons.go:429] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0103 19:54:12.931646  415354 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16103 bytes)
	I0103 19:54:12.931794  415354 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-845596
	I0103 19:54:12.942270  415354 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/17885-409390/.minikube/machines/addons-845596/id_rsa Username:docker}
	I0103 19:54:12.957397  415354 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0103 19:54:12.931168  415354 host.go:66] Checking if "addons-845596" exists ...
	I0103 19:54:12.957490  415354 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0103 19:54:12.964607  415354 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0103 19:54:12.970785  415354 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0103 19:54:12.977200  415354 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0103 19:54:12.972416  415354 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/17885-409390/.minikube/machines/addons-845596/id_rsa Username:docker}
	I0103 19:54:12.972459  415354 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/17885-409390/.minikube/machines/addons-845596/id_rsa Username:docker}
	I0103 19:54:12.984582  415354 out.go:177]   - Using image docker.io/busybox:stable
	I0103 19:54:12.986349  415354 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0103 19:54:12.986480  415354 addons.go:429] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0103 19:54:12.987961  415354 addons.go:429] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0103 19:54:12.987977  415354 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0103 19:54:12.987976  415354 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0103 19:54:12.988048  415354 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-845596
	I0103 19:54:12.988079  415354 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-845596
	I0103 19:54:13.010689  415354 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/17885-409390/.minikube/machines/addons-845596/id_rsa Username:docker}
	I0103 19:54:13.021036  415354 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/17885-409390/.minikube/machines/addons-845596/id_rsa Username:docker}
	I0103 19:54:13.049627  415354 addons.go:429] installing /etc/kubernetes/addons/storageclass.yaml
	I0103 19:54:13.049648  415354 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0103 19:54:13.049713  415354 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-845596
	I0103 19:54:13.127286  415354 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/17885-409390/.minikube/machines/addons-845596/id_rsa Username:docker}
	I0103 19:54:13.143428  415354 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/17885-409390/.minikube/machines/addons-845596/id_rsa Username:docker}
	I0103 19:54:13.150341  415354 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/17885-409390/.minikube/machines/addons-845596/id_rsa Username:docker}
	I0103 19:54:13.160929  415354 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/17885-409390/.minikube/machines/addons-845596/id_rsa Username:docker}
	I0103 19:54:13.206332  415354 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/17885-409390/.minikube/machines/addons-845596/id_rsa Username:docker}
	I0103 19:54:13.210150  415354 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/17885-409390/.minikube/machines/addons-845596/id_rsa Username:docker}
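Each `sshutil.go:53] new ssh client` line above is a fresh SSH connection to 127.0.0.1:33103 authenticated with the machine's id_rsa key. A minimal sketch of that dial plus the in-memory manifest write ("scp memory --> …"), using golang.org/x/crypto/ssh; the `sudo tee` sink and the placeholder manifest body are assumptions, not minikube's exact implementation:

package main

import (
	"bytes"
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	keyPath := "/home/jenkins/minikube-integration/17885-409390/.minikube/machines/addons-845596/id_rsa"
	key, err := os.ReadFile(keyPath)
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:33103", &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local dev machine
	})
	if err != nil {
		panic(err)
	}
	defer client.Close()

	// "scp memory": stream bytes held in memory straight to a path on the node.
	session, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer session.Close()
	manifest := []byte("# manifest bytes would go here\n") // placeholder, not the real yakd-ns.yaml
	session.Stdin = bytes.NewReader(manifest)
	if err := session.Run("sudo tee /tmp/example.yaml >/dev/null"); err != nil {
		panic(err)
	}
	fmt.Println("manifest written")
}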
	W0103 19:54:13.211751  415354 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0103 19:54:13.211781  415354 retry.go:31] will retry after 321.283846ms: ssh: handshake failed: EOF
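The handshake failure above is treated as transient: retry.go logs a randomized delay and dials again, and the rest of the section shows the connections succeeding. The pattern, as a small self-contained sketch (attempt count and jitter policy are assumptions):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retry runs fn until it succeeds or attempts are exhausted, sleeping a
// jittered delay between tries, mirroring "will retry after 321.283846ms".
func retry(attempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		delay := base + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %s: %v\n", delay, err)
		time.Sleep(delay)
	}
	return err
}

func main() {
	calls := 0
	err := retry(3, 200*time.Millisecond, func() error {
		calls++
		if calls < 3 {
			return errors.New("ssh: handshake failed: EOF")
		}
		return nil
	})
	fmt.Println("final:", err)
}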
	I0103 19:54:13.349454  415354 addons.go:429] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0103 19:54:13.349477  415354 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0103 19:54:13.403815  415354 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0103 19:54:13.428408  415354 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0103 19:54:13.440018  415354 addons.go:429] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0103 19:54:13.440080  415354 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0103 19:54:13.455391  415354 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-845596" context rescaled to 1 replicas
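The kapi.go line above rescales the coredns deployment to a single replica, which is sufficient for a one-node cluster. The equivalent step expressed through kubectl (minikube does this via a Kubernetes client, so shelling out here is only a stand-in):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("kubectl", "--context", "addons-845596",
		"-n", "kube-system", "scale", "deployment", "coredns",
		"--replicas=1").CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		fmt.Println("scale failed:", err)
	}
}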
	I0103 19:54:13.455475  415354 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0103 19:54:13.458106  415354 out.go:177] * Verifying Kubernetes components...
	I0103 19:54:13.459768  415354 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0103 19:54:13.464659  415354 addons.go:429] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0103 19:54:13.464679  415354 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0103 19:54:13.467571  415354 addons.go:429] installing /etc/kubernetes/addons/registry-svc.yaml
	I0103 19:54:13.467591  415354 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0103 19:54:13.532950  415354 addons.go:429] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0103 19:54:13.533022  415354 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0103 19:54:13.545193  415354 addons.go:429] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0103 19:54:13.545264  415354 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0103 19:54:13.591487  415354 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0103 19:54:13.607272  415354 addons.go:429] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0103 19:54:13.607341  415354 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0103 19:54:13.645676  415354 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0103 19:54:13.649794  415354 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0103 19:54:13.658614  415354 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0103 19:54:13.673867  415354 addons.go:429] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0103 19:54:13.673893  415354 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0103 19:54:13.680656  415354 addons.go:429] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0103 19:54:13.680725  415354 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0103 19:54:13.724674  415354 addons.go:429] installing /etc/kubernetes/addons/ig-role.yaml
	I0103 19:54:13.724700  415354 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0103 19:54:13.738896  415354 addons.go:429] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0103 19:54:13.738923  415354 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0103 19:54:13.753869  415354 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0103 19:54:13.784552  415354 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0103 19:54:13.845154  415354 addons.go:429] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0103 19:54:13.845222  415354 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0103 19:54:13.861633  415354 addons.go:429] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0103 19:54:13.861703  415354 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0103 19:54:13.975041  415354 addons.go:429] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0103 19:54:13.975104  415354 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0103 19:54:13.978407  415354 addons.go:429] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0103 19:54:13.978475  415354 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0103 19:54:14.060862  415354 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0103 19:54:14.075485  415354 addons.go:429] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0103 19:54:14.075554  415354 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0103 19:54:14.110756  415354 addons.go:429] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0103 19:54:14.110829  415354 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0103 19:54:14.165837  415354 addons.go:429] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0103 19:54:14.165908  415354 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0103 19:54:14.206859  415354 addons.go:429] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0103 19:54:14.206926  415354 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0103 19:54:14.310267  415354 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0103 19:54:14.363035  415354 addons.go:429] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0103 19:54:14.363105  415354 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0103 19:54:14.379458  415354 addons.go:429] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0103 19:54:14.379527  415354 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0103 19:54:14.401326  415354 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0103 19:54:14.521461  415354 addons.go:429] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0103 19:54:14.521534  415354 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0103 19:54:14.568281  415354 addons.go:429] installing /etc/kubernetes/addons/ig-crd.yaml
	I0103 19:54:14.568353  415354 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0103 19:54:14.667013  415354 addons.go:429] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0103 19:54:14.667082  415354 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0103 19:54:14.712776  415354 addons.go:429] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0103 19:54:14.712841  415354 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0103 19:54:14.809970  415354 addons.go:429] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0103 19:54:14.810042  415354 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0103 19:54:14.908486  415354 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0103 19:54:15.061137  415354 addons.go:429] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0103 19:54:15.061219  415354 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0103 19:54:15.219689  415354 addons.go:429] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0103 19:54:15.219762  415354 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0103 19:54:15.388588  415354 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.67865225s)
	I0103 19:54:15.388667  415354 start.go:929] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
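The 2.7s pipeline that just completed edits the coredns ConfigMap in place: sed inserts a hosts block ahead of the `forward . /etc/resolv.conf` directive (and a `log` directive ahead of `errors`), then pipes the result back through `kubectl replace`. The injected stanza, reproduced from the sed expression above, makes host.minikube.internal resolve to the gateway from inside the cluster:

        hosts {
           192.168.49.1 host.minikube.internal
           fallthrough
        }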
	I0103 19:54:15.486337  415354 addons.go:429] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0103 19:54:15.486407  415354 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0103 19:54:15.732985  415354 addons.go:429] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0103 19:54:15.733053  415354 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0103 19:54:15.869250  415354 addons.go:429] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0103 19:54:15.869323  415354 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0103 19:54:16.068097  415354 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0103 19:54:18.098180  415354 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.669698912s)
	I0103 19:54:18.098210  415354 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.694328119s)
	I0103 19:54:18.098233  415354 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (4.638415395s)
	I0103 19:54:18.099355  415354 node_ready.go:35] waiting up to 6m0s for node "addons-845596" to be "Ready" ...
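node_ready.go now polls the API server until the node's Ready condition flips to True; the `has status "Ready":"False"` lines recurring below are single iterations of that loop. A sketch of the same poll, shelling out to kubectl in place of minikube's client-go calls:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitNodeReady polls the node's Ready condition until it is "True" or the
// deadline passes, like the node_ready.go loop in the log.
func waitNodeReady(node string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "get", "node", node, "-o",
			`jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
		if err == nil && strings.TrimSpace(string(out)) == "True" {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("node %q not Ready within %s", node, timeout)
}

func main() {
	fmt.Println(waitNodeReady("addons-845596", 6*time.Minute))
}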
	I0103 19:54:18.261644  415354 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.670066781s)
	I0103 19:54:18.261914  415354 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.616170687s)
	I0103 19:54:18.261997  415354 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.612179159s)
	I0103 19:54:19.001487  415354 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.342833863s)
	I0103 19:54:19.001521  415354 addons.go:473] Verifying addon ingress=true in "addons-845596"
	I0103 19:54:19.003163  415354 out.go:177] * Verifying ingress addon...
	I0103 19:54:19.001688  415354 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.247747737s)
	I0103 19:54:19.001727  415354 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (5.217094174s)
	I0103 19:54:19.001782  415354 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.940839159s)
	I0103 19:54:19.001814  415354 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.691479424s)
	I0103 19:54:19.001901  415354 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.600505431s)
	I0103 19:54:19.001956  415354 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (4.093400756s)
	I0103 19:54:19.005631  415354 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0103 19:54:19.005928  415354 addons.go:473] Verifying addon registry=true in "addons-845596"
	I0103 19:54:19.010629  415354 out.go:177] * Verifying registry addon...
	I0103 19:54:19.006141  415354 addons.go:473] Verifying addon metrics-server=true in "addons-845596"
	W0103 19:54:19.006165  415354 addons.go:455] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0103 19:54:19.013229  415354 retry.go:31] will retry after 320.547721ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
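The failure above is an ordering race, not a broken manifest: the batch applies a VolumeSnapshotClass in the same `kubectl apply` invocation as the CRDs that define it, and on the first pass the new kind has no REST mapping yet, hence "ensure CRDs are installed first". minikube's answer is simply to retry the whole batch, which completes at 19:54:20.813922 below once the CRDs from the first pass are established. An alternative that avoids the race entirely is to split the apply in two and block on the CRD's Established condition; a sketch under that assumption:

package main

import (
	"fmt"
	"os/exec"
)

func run(args ...string) error {
	out, err := exec.Command(args[0], args[1:]...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("%v: %w\n%s", args, err, out)
	}
	return nil
}

func main() {
	steps := [][]string{
		// Phase 1: the CRDs themselves.
		{"kubectl", "apply", "-f", "/etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml"},
		// Block until the API server can serve the new kind.
		{"kubectl", "wait", "--for=condition=established", "--timeout=60s",
			"crd/volumesnapshotclasses.snapshot.storage.k8s.io"},
		// Phase 2: resources that depend on the CRD.
		{"kubectl", "apply", "-f", "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml"},
	}
	for _, step := range steps {
		if err := run(step...); err != nil {
			fmt.Println(err)
			return
		}
	}
}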
	I0103 19:54:19.014182  415354 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0103 19:54:19.014274  415354 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-845596 service yakd-dashboard -n yakd-dashboard
	
	
	I0103 19:54:19.028286  415354 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0103 19:54:19.028316  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:54:19.029312  415354 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0103 19:54:19.029334  415354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
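From here the log settles into kapi.go's verification loops: for each addon, list the pods matching a label selector, then poll until none of them is still Pending. A sketch of one such loop (kubectl again standing in for the real client; selector and namespace are taken from the log):

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// podsRunning reports whether every pod matching selector in ns has reached
// phase Running; an empty list counts as not ready yet.
func podsRunning(ns, selector string) (bool, error) {
	out, err := exec.Command("kubectl", "-n", ns, "get", "pods",
		"-l", selector, "-o", "jsonpath={.items[*].status.phase}").Output()
	if err != nil {
		return false, err
	}
	phases := strings.Fields(string(out))
	if len(phases) == 0 {
		return false, nil
	}
	for _, p := range phases {
		if p != "Running" {
			return false, nil
		}
	}
	return true, nil
}

func main() {
	for {
		if ok, err := podsRunning("ingress-nginx", "app.kubernetes.io/name=ingress-nginx"); err == nil && ok {
			fmt.Println("all pods Running")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
}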
	I0103 19:54:19.330959  415354 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.262769899s)
	I0103 19:54:19.330993  415354 addons.go:473] Verifying addon csi-hostpath-driver=true in "addons-845596"
	I0103 19:54:19.333148  415354 out.go:177] * Verifying csi-hostpath-driver addon...
	I0103 19:54:19.335779  415354 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0103 19:54:19.334416  415354 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0103 19:54:19.377797  415354 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0103 19:54:19.377822  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:54:19.512144  415354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:54:19.525193  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:54:19.856578  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:54:20.014635  415354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:54:20.028273  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:54:20.104586  415354 node_ready.go:58] node "addons-845596" has status "Ready":"False"
	I0103 19:54:20.344530  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:54:20.510247  415354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:54:20.519765  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:54:20.813922  415354 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.477881927s)
	I0103 19:54:20.845076  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:54:21.016334  415354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:54:21.019300  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:54:21.163296  415354 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0103 19:54:21.163412  415354 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-845596
	I0103 19:54:21.199689  415354 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/17885-409390/.minikube/machines/addons-845596/id_rsa Username:docker}
	I0103 19:54:21.343340  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:54:21.440926  415354 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0103 19:54:21.502098  415354 addons.go:237] Setting addon gcp-auth=true in "addons-845596"
	I0103 19:54:21.502176  415354 host.go:66] Checking if "addons-845596" exists ...
	I0103 19:54:21.502751  415354 cli_runner.go:164] Run: docker container inspect addons-845596 --format={{.State.Status}}
	I0103 19:54:21.517094  415354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:54:21.528403  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:54:21.541896  415354 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0103 19:54:21.542000  415354 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-845596
	I0103 19:54:21.579700  415354 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/17885-409390/.minikube/machines/addons-845596/id_rsa Username:docker}
	I0103 19:54:21.704131  415354 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I0103 19:54:21.705924  415354 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I0103 19:54:21.707755  415354 addons.go:429] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0103 19:54:21.707786  415354 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0103 19:54:21.764201  415354 addons.go:429] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0103 19:54:21.764228  415354 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0103 19:54:21.787803  415354 addons.go:429] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0103 19:54:21.787838  415354 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5432 bytes)
	I0103 19:54:21.813148  415354 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0103 19:54:21.841623  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:54:22.020573  415354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:54:22.032875  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:54:22.112339  415354 node_ready.go:58] node "addons-845596" has status "Ready":"False"
	I0103 19:54:22.347860  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:54:22.512499  415354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:54:22.520885  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:54:22.842669  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:54:23.068165  415354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:54:23.076013  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:54:23.105036  415354 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.291828393s)
	I0103 19:54:23.107941  415354 addons.go:473] Verifying addon gcp-auth=true in "addons-845596"
	I0103 19:54:23.109787  415354 out.go:177] * Verifying gcp-auth addon...
	I0103 19:54:23.112359  415354 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0103 19:54:23.126502  415354 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0103 19:54:23.126533  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:54:23.341075  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:54:23.512043  415354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:54:23.524852  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:54:23.616775  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:54:23.840274  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:54:24.011537  415354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:54:24.022973  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:54:24.116072  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:54:24.340430  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:54:24.510407  415354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:54:24.519041  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:54:24.603531  415354 node_ready.go:58] node "addons-845596" has status "Ready":"False"
	I0103 19:54:24.617028  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:54:24.840770  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:54:25.018608  415354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:54:25.026540  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:54:25.116875  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:54:25.340678  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:54:25.510575  415354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:54:25.518900  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:54:25.616876  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:54:25.841458  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:54:26.009759  415354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:54:26.019268  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:54:26.115969  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:54:26.340342  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:54:26.510632  415354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:54:26.518487  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:54:26.616293  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:54:26.841044  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:54:27.012496  415354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:54:27.019805  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:54:27.103335  415354 node_ready.go:58] node "addons-845596" has status "Ready":"False"
	I0103 19:54:27.116029  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:54:27.341023  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:54:27.510709  415354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:54:27.518665  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:54:27.616384  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:54:27.841318  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:54:28.010214  415354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:54:28.026023  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:54:28.116548  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:54:28.341033  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:54:28.510758  415354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:54:28.519050  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:54:28.615926  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:54:28.840908  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:54:29.010666  415354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:54:29.018990  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:54:29.131828  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:54:29.340267  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:54:29.510073  415354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:54:29.518356  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:54:29.603516  415354 node_ready.go:58] node "addons-845596" has status "Ready":"False"
	I0103 19:54:29.616376  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:54:29.840535  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:54:30.009862  415354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:54:30.024034  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:54:30.116825  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:54:30.340377  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:54:30.510064  415354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:54:30.518965  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:54:30.616624  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:54:30.840558  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:54:31.015439  415354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:54:31.019305  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:54:31.116720  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:54:31.340720  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:54:31.511158  415354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:54:31.518233  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:54:31.616318  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:54:31.840967  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:54:32.010497  415354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:54:32.020023  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:54:32.103282  415354 node_ready.go:58] node "addons-845596" has status "Ready":"False"
	I0103 19:54:32.116899  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:54:32.340683  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:54:32.510251  415354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:54:32.518713  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:54:32.620112  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:54:32.840326  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:54:33.010154  415354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:54:33.019815  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:54:33.117170  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:54:33.341042  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:54:33.510358  415354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:54:33.518879  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:54:33.616043  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:54:33.840614  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:54:34.014060  415354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:54:34.025265  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:54:34.103656  415354 node_ready.go:58] node "addons-845596" has status "Ready":"False"
	I0103 19:54:34.116694  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:54:34.340142  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:54:34.510230  415354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:54:34.518753  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:54:34.616071  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:54:34.840406  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:54:35.010499  415354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:54:35.020283  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:54:35.116620  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:54:35.341324  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:54:35.510573  415354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:54:35.518910  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:54:35.617080  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:54:35.840161  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:54:36.011232  415354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:54:36.020695  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:54:36.116173  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:54:36.341339  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:54:36.510274  415354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:54:36.518794  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:54:36.602776  415354 node_ready.go:58] node "addons-845596" has status "Ready":"False"
	I0103 19:54:36.616929  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:54:36.841075  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:54:37.009909  415354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:54:37.020058  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:54:37.116970  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:54:37.340108  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:54:37.510678  415354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:54:37.520856  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:54:37.615890  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:54:37.841032  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:54:38.022216  415354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:54:38.025559  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:54:38.116630  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:54:38.340839  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:54:38.510650  415354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:54:38.519814  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:54:38.603317  415354 node_ready.go:58] node "addons-845596" has status "Ready":"False"
	I0103 19:54:38.616047  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:54:38.840631  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:54:39.010304  415354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:54:39.018990  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:54:39.117108  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:54:39.340185  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:54:39.509634  415354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:54:39.518701  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:54:39.616869  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:54:39.840598  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:54:40.009928  415354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:54:40.022862  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:54:40.117183  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:54:40.341094  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:54:40.511047  415354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:54:40.518104  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:54:40.603510  415354 node_ready.go:58] node "addons-845596" has status "Ready":"False"
	I0103 19:54:40.616181  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:54:40.840674  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:54:41.011563  415354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:54:41.019819  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:54:41.116438  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:54:41.340593  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:54:41.510293  415354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:54:41.518267  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:54:41.615821  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:54:41.840243  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:54:42.010862  415354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:54:42.020726  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:54:42.141577  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:54:42.341735  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:54:42.516581  415354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:54:42.525302  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:54:42.603605  415354 node_ready.go:58] node "addons-845596" has status "Ready":"False"
	I0103 19:54:42.616669  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:54:42.840874  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:54:43.010472  415354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:54:43.019281  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:54:43.115807  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:54:43.340319  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:54:43.509888  415354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:54:43.519150  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:54:43.616820  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:54:43.840875  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:54:44.010234  415354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:54:44.019258  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:54:44.116190  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:54:44.340837  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:54:44.510268  415354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:54:44.518371  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:54:44.616267  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:54:44.845730  415354 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0103 19:54:44.845763  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
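The kapi.go:96 lines above come from a simple poll: list the pods matching a label selector roughly every 500ms and log their state until all of them reach Running. A minimal client-go sketch of that loop, assuming a pre-built clientset (the package name, function name, and error handling here are illustrative, not minikube's actual kapi.go):

	package kapisketch

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// waitForPods polls the pods matching selector in ns until all of them
	// are Running or timeout elapses, logging each pending pass much like
	// the "waiting for pod ..." lines above.
	func waitForPods(cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			pods, err := cs.CoreV1().Pods(ns).List(context.TODO(),
				metav1.ListOptions{LabelSelector: selector})
			if err != nil {
				return err // a real poller might retry transient API errors instead
			}
			allRunning := len(pods.Items) > 0
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
					allRunning = false
					break
				}
			}
			if allRunning {
				return nil
			}
			time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence visible in the timestamps
		}
		return fmt.Errorf("timed out waiting for pods matching %q", selector)
	}
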
	I0103 19:54:45.061658  415354 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0103 19:54:45.061689  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:54:45.061990  415354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:54:45.133517  415354 node_ready.go:49] node "addons-845596" has status "Ready":"True"
	I0103 19:54:45.133562  415354 node_ready.go:38] duration metric: took 27.034136752s waiting for node "addons-845596" to be "Ready" ...
	I0103 19:54:45.133576  415354 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
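At 19:54:45 the node_ready lines flip from "Ready":"False" to "Ready":"True" after about 27s: a node counts as ready once its NodeReady condition reports "True". A sketch of that check, reusing the imports from the sketch above (illustrative, not minikube's node_ready.go):

	// nodeIsReady reports whether the named node's NodeReady condition is
	// True, the same condition the "Ready":"False"/"Ready":"True" lines track.
	func nodeIsReady(cs kubernetes.Interface, name string) (bool, error) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	}
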
	I0103 19:54:45.207974  415354 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-kr7hh" in "kube-system" namespace to be "Ready" ...
	I0103 19:54:45.211608  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:54:45.368133  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:54:45.536253  415354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:54:45.541439  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:54:45.626727  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:54:45.870654  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:54:46.011984  415354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:54:46.023253  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:54:46.116302  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:54:46.342064  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:54:46.510634  415354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:54:46.519248  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:54:46.619184  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:54:46.737994  415354 pod_ready.go:92] pod "coredns-5dd5756b68-kr7hh" in "kube-system" namespace has status "Ready":"True"
	I0103 19:54:46.738020  415354 pod_ready.go:81] duration metric: took 1.530004694s waiting for pod "coredns-5dd5756b68-kr7hh" in "kube-system" namespace to be "Ready" ...
	I0103 19:54:46.738044  415354 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-845596" in "kube-system" namespace to be "Ready" ...
	I0103 19:54:46.764948  415354 pod_ready.go:92] pod "etcd-addons-845596" in "kube-system" namespace has status "Ready":"True"
	I0103 19:54:46.764974  415354 pod_ready.go:81] duration metric: took 26.922088ms waiting for pod "etcd-addons-845596" in "kube-system" namespace to be "Ready" ...
	I0103 19:54:46.764989  415354 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-845596" in "kube-system" namespace to be "Ready" ...
	I0103 19:54:46.782012  415354 pod_ready.go:92] pod "kube-apiserver-addons-845596" in "kube-system" namespace has status "Ready":"True"
	I0103 19:54:46.782039  415354 pod_ready.go:81] duration metric: took 17.03212ms waiting for pod "kube-apiserver-addons-845596" in "kube-system" namespace to be "Ready" ...
	I0103 19:54:46.782051  415354 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-845596" in "kube-system" namespace to be "Ready" ...
	I0103 19:54:46.790384  415354 pod_ready.go:92] pod "kube-controller-manager-addons-845596" in "kube-system" namespace has status "Ready":"True"
	I0103 19:54:46.790411  415354 pod_ready.go:81] duration metric: took 8.352122ms waiting for pod "kube-controller-manager-addons-845596" in "kube-system" namespace to be "Ready" ...
	I0103 19:54:46.790426  415354 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-l9r8j" in "kube-system" namespace to be "Ready" ...
	I0103 19:54:46.797339  415354 pod_ready.go:92] pod "kube-proxy-l9r8j" in "kube-system" namespace has status "Ready":"True"
	I0103 19:54:46.797366  415354 pod_ready.go:81] duration metric: took 6.931903ms waiting for pod "kube-proxy-l9r8j" in "kube-system" namespace to be "Ready" ...
	I0103 19:54:46.797378  415354 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-845596" in "kube-system" namespace to be "Ready" ...
	I0103 19:54:46.842770  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:54:47.012251  415354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:54:47.021436  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:54:47.115717  415354 pod_ready.go:92] pod "kube-scheduler-addons-845596" in "kube-system" namespace has status "Ready":"True"
	I0103 19:54:47.115789  415354 pod_ready.go:81] duration metric: took 318.402005ms waiting for pod "kube-scheduler-addons-845596" in "kube-system" namespace to be "Ready" ...
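The pod_ready lines above walk the system-critical pods (coredns, etcd, kube-apiserver, kube-controller-manager, kube-proxy, kube-scheduler) one by one, recording a duration metric once each pod's PodReady condition turns "True". A per-pod sketch of that check in the same style (illustrative only; the duration metric is just the elapsed time until this first returns true):

	// podIsReady reports whether the pod's PodReady condition is True.
	func podIsReady(cs kubernetes.Interface, ns, name string) (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	}
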
	I0103 19:54:47.115816  415354 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-7c66d45ddc-rhh5h" in "kube-system" namespace to be "Ready" ...
	I0103 19:54:47.117154  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:54:47.343870  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:54:47.511281  415354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:54:47.525152  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:54:47.618483  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:54:47.842313  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:54:48.019958  415354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:54:48.025029  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:54:48.117115  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:54:48.342965  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:54:48.510942  415354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:54:48.519678  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:54:48.617514  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:54:48.842353  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:54:49.010751  415354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:54:49.019274  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:54:49.116166  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:54:49.135327  415354 pod_ready.go:102] pod "metrics-server-7c66d45ddc-rhh5h" in "kube-system" namespace has status "Ready":"False"
	I0103 19:54:49.343613  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:54:49.510626  415354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:54:49.519650  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:54:49.616592  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:54:49.847355  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:54:50.018086  415354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:54:50.028601  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:54:50.118410  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:54:50.341829  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:54:50.510303  415354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:54:50.518786  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:54:50.617997  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:54:50.841957  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:54:51.010324  415354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:54:51.020545  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:54:51.117057  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:54:51.341418  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:54:51.510390  415354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:54:51.520531  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:54:51.616943  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:54:51.627275  415354 pod_ready.go:102] pod "metrics-server-7c66d45ddc-rhh5h" in "kube-system" namespace has status "Ready":"False"
	I0103 19:54:51.842786  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:54:52.011412  415354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:54:52.023553  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:54:52.117036  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:54:52.342409  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:54:52.511507  415354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:54:52.520074  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:54:52.616920  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:54:52.844716  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:54:53.031968  415354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:54:53.039650  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:54:53.116930  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:54:53.342975  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:54:53.510732  415354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:54:53.520186  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:54:53.617004  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:54:53.628973  415354 pod_ready.go:102] pod "metrics-server-7c66d45ddc-rhh5h" in "kube-system" namespace has status "Ready":"False"
	I0103 19:54:53.842432  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:54:54.014924  415354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:54:54.024258  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:54:54.120073  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:54:54.342825  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:54:54.510798  415354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:54:54.519862  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:54:54.618618  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:54:54.841938  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:54:55.010397  415354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:54:55.024989  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:54:55.119063  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:54:55.342016  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:54:55.511218  415354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:54:55.518861  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:54:55.629903  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:54:55.631994  415354 pod_ready.go:102] pod "metrics-server-7c66d45ddc-rhh5h" in "kube-system" namespace has status "Ready":"False"
	I0103 19:54:55.853756  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:54:56.014047  415354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:54:56.024813  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:54:56.118888  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:54:56.343820  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:54:56.513256  415354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:54:56.531649  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:54:56.619761  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:54:56.843649  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:54:57.012378  415354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:54:57.029789  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:54:57.116867  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:54:57.342957  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:54:57.513091  415354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:54:57.524045  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:54:57.619179  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:54:57.844161  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:54:58.017250  415354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:54:58.024106  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:54:58.117819  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:54:58.129083  415354 pod_ready.go:102] pod "metrics-server-7c66d45ddc-rhh5h" in "kube-system" namespace has status "Ready":"False"
	I0103 19:54:58.350736  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:54:58.511167  415354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:54:58.523755  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:54:58.617712  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:54:58.845500  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:54:59.011122  415354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:54:59.020067  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:54:59.117924  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:54:59.341700  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:54:59.511168  415354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:54:59.520116  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:54:59.617251  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:54:59.843334  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:55:00.011273  415354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:55:00.047432  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:55:00.255903  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:55:00.292810  415354 pod_ready.go:102] pod "metrics-server-7c66d45ddc-rhh5h" in "kube-system" namespace has status "Ready":"False"
	I0103 19:55:00.398733  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:55:00.513784  415354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:55:00.521152  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:55:00.617569  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:55:00.853611  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:55:01.011713  415354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:55:01.020804  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:55:01.116544  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:55:01.342419  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:55:01.512325  415354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:55:01.519941  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:55:01.617815  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:55:01.848421  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:55:02.012231  415354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:55:02.020801  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:55:02.117357  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:55:02.341893  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:55:02.518455  415354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:55:02.521383  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:55:02.616620  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:55:02.622993  415354 pod_ready.go:102] pod "metrics-server-7c66d45ddc-rhh5h" in "kube-system" namespace has status "Ready":"False"
	I0103 19:55:02.841929  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:55:03.010732  415354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:55:03.020517  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:55:03.116753  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:55:03.341916  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:55:03.510962  415354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:55:03.519915  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:55:03.619209  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:55:03.847503  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:55:04.011397  415354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:55:04.022873  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:55:04.117022  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:55:04.342796  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:55:04.537565  415354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:55:04.538464  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:55:04.617338  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:55:04.627617  415354 pod_ready.go:102] pod "metrics-server-7c66d45ddc-rhh5h" in "kube-system" namespace has status "Ready":"False"
	I0103 19:55:04.844172  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:55:05.010870  415354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:55:05.025338  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:55:05.117887  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:55:05.345256  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:55:05.527052  415354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:55:05.550807  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:55:05.617407  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:55:05.842769  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:55:06.013280  415354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:55:06.021385  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:55:06.118762  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:55:06.351282  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:55:06.547224  415354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:55:06.548044  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:55:06.683660  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:55:06.688621  415354 pod_ready.go:102] pod "metrics-server-7c66d45ddc-rhh5h" in "kube-system" namespace has status "Ready":"False"
	I0103 19:55:06.850172  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:55:07.012433  415354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:55:07.042090  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:55:07.125573  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:55:07.342553  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:55:07.512623  415354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:55:07.524548  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:55:07.616857  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:55:07.629255  415354 pod_ready.go:92] pod "metrics-server-7c66d45ddc-rhh5h" in "kube-system" namespace has status "Ready":"True"
	I0103 19:55:07.629292  415354 pod_ready.go:81] duration metric: took 20.513457138s waiting for pod "metrics-server-7c66d45ddc-rhh5h" in "kube-system" namespace to be "Ready" ...
	I0103 19:55:07.629305  415354 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-jv75d" in "kube-system" namespace to be "Ready" ...
	I0103 19:55:07.841732  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:55:08.011717  415354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:55:08.020816  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:55:08.117974  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:55:08.341539  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:55:08.512148  415354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:55:08.523295  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:55:08.616262  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:55:08.841419  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:55:09.016444  415354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:55:09.026697  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:55:09.116505  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:55:09.342424  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:55:09.510826  415354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:55:09.519945  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:55:09.617104  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:55:09.638501  415354 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-jv75d" in "kube-system" namespace has status "Ready":"False"
	I0103 19:55:09.845430  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:55:10.012204  415354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:55:10.050993  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:55:10.124439  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:55:10.342542  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:55:10.511254  415354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:55:10.523450  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:55:10.617279  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:55:10.844361  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:55:11.011859  415354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:55:11.021606  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:55:11.117007  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:55:11.342584  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:55:11.510946  415354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:55:11.519929  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:55:11.616933  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:55:11.641447  415354 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-jv75d" in "kube-system" namespace has status "Ready":"False"
	I0103 19:55:11.843259  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:55:12.022012  415354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:55:12.026657  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:55:12.116954  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:55:12.342821  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:55:12.511579  415354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:55:12.520085  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:55:12.617026  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:55:12.843431  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:55:13.012632  415354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:55:13.021010  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:55:13.116648  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:55:13.342752  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:55:13.511694  415354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:55:13.527582  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:55:13.617238  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:55:13.841916  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:55:14.017130  415354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:55:14.021161  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:55:14.116809  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:55:14.135915  415354 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-jv75d" in "kube-system" namespace has status "Ready":"False"
	I0103 19:55:14.343865  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:55:14.510304  415354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:55:14.519802  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:55:14.624951  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:55:14.841373  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:55:15.011364  415354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:55:15.020999  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:55:15.123881  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:55:15.341913  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:55:15.510655  415354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:55:15.519149  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:55:15.619789  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:55:15.841787  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:55:16.017313  415354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:55:16.021490  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:55:16.116518  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:55:16.341721  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:55:16.512370  415354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:55:16.519853  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:55:16.623788  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:55:16.643838  415354 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-jv75d" in "kube-system" namespace has status "Ready":"False"
	I0103 19:55:16.842376  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:55:17.012027  415354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:55:17.019934  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:55:17.117146  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:55:17.341963  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:55:17.510611  415354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:55:17.521301  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:55:17.619755  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:55:17.842403  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:55:18.020218  415354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:55:18.023696  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:55:18.116642  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:55:18.341830  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:55:18.510492  415354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:55:18.519287  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:55:18.618338  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:55:18.847548  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:55:19.016609  415354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:55:19.023385  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:55:19.119860  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:55:19.137264  415354 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-jv75d" in "kube-system" namespace has status "Ready":"False"
	I0103 19:55:19.343001  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:55:19.510395  415354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:55:19.519548  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:55:19.616517  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:55:19.841837  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:55:20.017826  415354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:55:20.033116  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:55:20.116440  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:55:20.341803  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:55:20.509996  415354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:55:20.520394  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:55:20.616363  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:55:20.842213  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:55:21.011991  415354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:55:21.020654  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:55:21.116483  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:55:21.342616  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:55:21.510045  415354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:55:21.520100  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:55:21.616894  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:55:21.641317  415354 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-jv75d" in "kube-system" namespace has status "Ready":"False"
	I0103 19:55:21.843762  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:55:22.011189  415354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:55:22.020595  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:55:22.117067  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:55:22.342132  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:55:22.511686  415354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:55:22.520743  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:55:22.617893  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:55:22.845380  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:55:23.012643  415354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:55:23.020558  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:55:23.117524  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:55:23.342742  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:55:23.511204  415354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:55:23.551585  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:55:23.636633  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:55:23.669935  415354 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-jv75d" in "kube-system" namespace has status "Ready":"False"
	I0103 19:55:23.842091  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:55:24.011703  415354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:55:24.030357  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:55:24.116558  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:55:24.351883  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:55:24.510649  415354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:55:24.519819  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:55:24.616887  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:55:24.842406  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:55:25.011670  415354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:55:25.028500  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:55:25.117735  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:55:25.342420  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:55:25.511124  415354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:55:25.519057  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:55:25.618632  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:55:25.841612  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:55:26.014322  415354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:55:26.020515  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:55:26.120785  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:55:26.140776  415354 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-jv75d" in "kube-system" namespace has status "Ready":"True"
	I0103 19:55:26.140804  415354 pod_ready.go:81] duration metric: took 18.511490089s waiting for pod "nvidia-device-plugin-daemonset-jv75d" in "kube-system" namespace to be "Ready" ...
	I0103 19:55:26.140832  415354 pod_ready.go:38] duration metric: took 41.007226327s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
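The readiness waits above (pod_ready.go, kapi.go) are plain label-selector polls against the API server, retried on a short interval until the pod reports the Ready condition. A minimal client-go sketch of the same pattern follows; the kubeconfig path, namespace, selector, and timeout are illustrative assumptions, not minikube's actual wiring:

// podready_sketch.go — sketch of a label-selector readiness poll.
// Assumes a reachable kubeconfig at the default path.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isReady reports whether the pod's Ready condition is True.
func isReady(pod corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	deadline := time.Now().Add(6 * time.Minute) // illustrative timeout
	for time.Now().Before(deadline) {
		pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(),
			metav1.ListOptions{LabelSelector: "kubernetes.io/minikube-addons=registry"})
		if err == nil && len(pods.Items) > 0 && isReady(pods.Items[0]) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence in the log
	}
	fmt.Println("timed out waiting for pod")
}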
	I0103 19:55:26.140847  415354 api_server.go:52] waiting for apiserver process to appear ...
	I0103 19:55:26.140878  415354 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0103 19:55:26.140942  415354 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0103 19:55:26.342646  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:55:26.370889  415354 cri.go:89] found id: "1faade0b1ccbd549284464fc35d84dff7c39ac561e2c9d623dbddc19f4d3376f"
	I0103 19:55:26.370913  415354 cri.go:89] found id: ""
	I0103 19:55:26.370921  415354 logs.go:284] 1 containers: [1faade0b1ccbd549284464fc35d84dff7c39ac561e2c9d623dbddc19f4d3376f]
	I0103 19:55:26.370979  415354 ssh_runner.go:195] Run: which crictl
	I0103 19:55:26.387871  415354 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0103 19:55:26.387944  415354 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0103 19:55:26.482938  415354 cri.go:89] found id: "5cabc9f1741c8b56aa9a19c0eab917f816d0cbf28813f861da55877f037eb09a"
	I0103 19:55:26.482964  415354 cri.go:89] found id: ""
	I0103 19:55:26.482978  415354 logs.go:284] 1 containers: [5cabc9f1741c8b56aa9a19c0eab917f816d0cbf28813f861da55877f037eb09a]
	I0103 19:55:26.483035  415354 ssh_runner.go:195] Run: which crictl
	I0103 19:55:26.492207  415354 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0103 19:55:26.492278  415354 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0103 19:55:26.511212  415354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:55:26.519664  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:55:26.575907  415354 cri.go:89] found id: "d17c13b24aa08668db9f4c720f60666152f1dac205b5916e1ff1ec7cef705a8e"
	I0103 19:55:26.575969  415354 cri.go:89] found id: ""
	I0103 19:55:26.575999  415354 logs.go:284] 1 containers: [d17c13b24aa08668db9f4c720f60666152f1dac205b5916e1ff1ec7cef705a8e]
	I0103 19:55:26.576087  415354 ssh_runner.go:195] Run: which crictl
	I0103 19:55:26.592318  415354 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0103 19:55:26.592439  415354 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0103 19:55:26.616282  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:55:26.675060  415354 cri.go:89] found id: "18d3ca7c1c09b5c77efad6ff4bb566db451858a7f6b103117707d9d92367ac64"
	I0103 19:55:26.675129  415354 cri.go:89] found id: ""
	I0103 19:55:26.675169  415354 logs.go:284] 1 containers: [18d3ca7c1c09b5c77efad6ff4bb566db451858a7f6b103117707d9d92367ac64]
	I0103 19:55:26.675263  415354 ssh_runner.go:195] Run: which crictl
	I0103 19:55:26.683027  415354 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0103 19:55:26.683148  415354 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0103 19:55:26.764155  415354 cri.go:89] found id: "65e3dde0649c4a5f672399026ae7cb9d25107cc73959c6ed266f3c9efccf1395"
	I0103 19:55:26.764179  415354 cri.go:89] found id: ""
	I0103 19:55:26.764187  415354 logs.go:284] 1 containers: [65e3dde0649c4a5f672399026ae7cb9d25107cc73959c6ed266f3c9efccf1395]
	I0103 19:55:26.764254  415354 ssh_runner.go:195] Run: which crictl
	I0103 19:55:26.769746  415354 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0103 19:55:26.769863  415354 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0103 19:55:26.842617  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:55:26.843437  415354 cri.go:89] found id: "70fe45f96a87a54c9f232221f19dc8528d556ca572561eff45b35fb4ce8de8c4"
	I0103 19:55:26.843486  415354 cri.go:89] found id: ""
	I0103 19:55:26.843506  415354 logs.go:284] 1 containers: [70fe45f96a87a54c9f232221f19dc8528d556ca572561eff45b35fb4ce8de8c4]
	I0103 19:55:26.843586  415354 ssh_runner.go:195] Run: which crictl
	I0103 19:55:26.858075  415354 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0103 19:55:26.858190  415354 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0103 19:55:26.915425  415354 cri.go:89] found id: "6d02024b7b0a3949604c668502ffbd99fc70a3d509b9d1cce0f7eabdd9c75439"
	I0103 19:55:26.915494  415354 cri.go:89] found id: ""
	I0103 19:55:26.915515  415354 logs.go:284] 1 containers: [6d02024b7b0a3949604c668502ffbd99fc70a3d509b9d1cce0f7eabdd9c75439]
	I0103 19:55:26.915603  415354 ssh_runner.go:195] Run: which crictl
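Each "found id" above is the result of `sudo crictl ps -a --quiet --name=<component>`, which prints matching container IDs one per line; the report then tails each ID with `sudo crictl logs --tail 400 <id>`. A sketch of that discovery step, assuming crictl is on PATH and sudo is available:

// crictl_sketch.go — sketch of the container-ID discovery seen above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs returns the IDs of all containers whose name matches name,
// mirroring: sudo crictl ps -a --quiet --name=<name>
func containerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil // one ID per line; Fields trims the trailing newline
}

func main() {
	ids, err := containerIDs("kube-apiserver")
	if err != nil {
		panic(err)
	}
	for _, id := range ids {
		// The report then gathers logs per ID:
		//   sudo crictl logs --tail 400 <id>
		fmt.Println("found id:", id)
	}
}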
	I0103 19:55:26.921894  415354 logs.go:123] Gathering logs for dmesg ...
	I0103 19:55:26.921969  415354 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0103 19:55:26.955307  415354 logs.go:123] Gathering logs for coredns [d17c13b24aa08668db9f4c720f60666152f1dac205b5916e1ff1ec7cef705a8e] ...
	I0103 19:55:26.955376  415354 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d17c13b24aa08668db9f4c720f60666152f1dac205b5916e1ff1ec7cef705a8e"
	I0103 19:55:27.010933  415354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:55:27.024257  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:55:27.031071  415354 logs.go:123] Gathering logs for kube-controller-manager [70fe45f96a87a54c9f232221f19dc8528d556ca572561eff45b35fb4ce8de8c4] ...
	I0103 19:55:27.031106  415354 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 70fe45f96a87a54c9f232221f19dc8528d556ca572561eff45b35fb4ce8de8c4"
	I0103 19:55:27.117936  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:55:27.139358  415354 logs.go:123] Gathering logs for kindnet [6d02024b7b0a3949604c668502ffbd99fc70a3d509b9d1cce0f7eabdd9c75439] ...
	I0103 19:55:27.139399  415354 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6d02024b7b0a3949604c668502ffbd99fc70a3d509b9d1cce0f7eabdd9c75439"
	I0103 19:55:27.185495  415354 logs.go:123] Gathering logs for kubelet ...
	I0103 19:55:27.185522  415354 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0103 19:55:27.252597  415354 logs.go:138] Found kubelet problem: Jan 03 19:54:44 addons-845596 kubelet[1346]: W0103 19:54:44.728593    1346 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-845596" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-845596' and this object
	W0103 19:55:27.252873  415354 logs.go:138] Found kubelet problem: Jan 03 19:54:44 addons-845596 kubelet[1346]: E0103 19:54:44.728674    1346 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-845596" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-845596' and this object
	I0103 19:55:27.287117  415354 logs.go:123] Gathering logs for describe nodes ...
	I0103 19:55:27.287147  415354 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0103 19:55:27.342593  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:55:27.458429  415354 logs.go:123] Gathering logs for kube-apiserver [1faade0b1ccbd549284464fc35d84dff7c39ac561e2c9d623dbddc19f4d3376f] ...
	I0103 19:55:27.458461  415354 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1faade0b1ccbd549284464fc35d84dff7c39ac561e2c9d623dbddc19f4d3376f"
	I0103 19:55:27.515791  415354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:55:27.522128  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:55:27.524989  415354 logs.go:123] Gathering logs for etcd [5cabc9f1741c8b56aa9a19c0eab917f816d0cbf28813f861da55877f037eb09a] ...
	I0103 19:55:27.525011  415354 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5cabc9f1741c8b56aa9a19c0eab917f816d0cbf28813f861da55877f037eb09a"
	I0103 19:55:27.576490  415354 logs.go:123] Gathering logs for kube-scheduler [18d3ca7c1c09b5c77efad6ff4bb566db451858a7f6b103117707d9d92367ac64] ...
	I0103 19:55:27.576521  415354 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 18d3ca7c1c09b5c77efad6ff4bb566db451858a7f6b103117707d9d92367ac64"
	I0103 19:55:27.618380  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:55:27.627873  415354 logs.go:123] Gathering logs for kube-proxy [65e3dde0649c4a5f672399026ae7cb9d25107cc73959c6ed266f3c9efccf1395] ...
	I0103 19:55:27.627908  415354 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 65e3dde0649c4a5f672399026ae7cb9d25107cc73959c6ed266f3c9efccf1395"
	I0103 19:55:27.690564  415354 logs.go:123] Gathering logs for CRI-O ...
	I0103 19:55:27.690594  415354 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0103 19:55:27.784116  415354 logs.go:123] Gathering logs for container status ...
	I0103 19:55:27.784152  415354 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0103 19:55:27.848500  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:55:27.858216  415354 out.go:309] Setting ErrFile to fd 2...
	I0103 19:55:27.858244  415354 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0103 19:55:27.858296  415354 out.go:239] X Problems detected in kubelet:
	W0103 19:55:27.858310  415354 out.go:239]   Jan 03 19:54:44 addons-845596 kubelet[1346]: W0103 19:54:44.728593    1346 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-845596" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-845596' and this object
	W0103 19:55:27.858325  415354 out.go:239]   Jan 03 19:54:44 addons-845596 kubelet[1346]: E0103 19:54:44.728674    1346 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-845596" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-845596' and this object
	I0103 19:55:27.858334  415354 out.go:309] Setting ErrFile to fd 2...
	I0103 19:55:27.858340  415354 out.go:343] TERM=,COLORTERM=, which probably does not support color
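The "Found kubelet problem" warnings come from scanning the `journalctl -u kubelet -n 400` output for known error patterns (logs.go:138), which are then replayed in the "Problems detected in kubelet" summary above. A sketch of such a scan; the pattern here is an illustrative stand-in for minikube's real pattern set:

// kubelet_problems_sketch.go — sketch of scanning kubelet journal output
// for problem lines. The regexp is illustrative only.
package main

import (
	"bufio"
	"fmt"
	"os/exec"
	"regexp"
)

var problemRe = regexp.MustCompile(`(?i)(failed to list|forbidden|error)`) // illustrative

func main() {
	cmd := exec.Command("sudo", "journalctl", "-u", "kubelet", "-n", "400")
	out, err := cmd.StdoutPipe()
	if err != nil {
		panic(err)
	}
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	sc := bufio.NewScanner(out)
	for sc.Scan() {
		if line := sc.Text(); problemRe.MatchString(line) {
			fmt.Println("Found kubelet problem:", line)
		}
	}
	_ = cmd.Wait()
}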
	I0103 19:55:28.016875  415354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:55:28.021758  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:55:28.116542  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:55:28.341886  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:55:28.511137  415354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:55:28.520009  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:55:28.615818  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:55:28.843643  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:55:29.011339  415354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:55:29.020606  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:55:29.116960  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:55:29.342312  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:55:29.510801  415354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:55:29.520078  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:55:29.616624  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:55:29.849388  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:55:30.012221  415354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:55:30.033467  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:55:30.118555  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:55:30.343084  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:55:30.511636  415354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:55:30.519072  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:55:30.617286  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:55:30.856878  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:55:31.011515  415354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:55:31.021841  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:55:31.116665  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:55:31.343799  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:55:31.511005  415354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:55:31.520437  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:55:31.617350  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:55:31.842747  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:55:32.019315  415354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:55:32.022131  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:55:32.117516  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:55:32.350696  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:55:32.513887  415354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:55:32.526387  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0103 19:55:32.618081  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:55:32.843077  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:55:33.011634  415354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:55:33.021047  415354 kapi.go:107] duration metric: took 1m14.006863959s to wait for kubernetes.io/minikube-addons=registry ...
	I0103 19:55:33.116941  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:55:33.341411  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:55:33.510192  415354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:55:33.616872  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:55:33.842362  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:55:34.016862  415354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:55:34.116476  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:55:34.341828  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:55:34.511016  415354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:55:34.617302  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:55:34.842185  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:55:35.013416  415354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:55:35.117636  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:55:35.343652  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:55:35.512400  415354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:55:35.616001  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0103 19:55:35.842626  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:55:36.010314  415354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:55:36.116580  415354 kapi.go:107] duration metric: took 1m13.004217437s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0103 19:55:36.118850  415354 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-845596 cluster.
	I0103 19:55:36.120814  415354 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0103 19:55:36.122292  415354 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0103 19:55:36.341832  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:55:36.510313  415354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:55:36.842544  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:55:37.013787  415354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:55:37.349475  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:55:37.513953  415354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:55:37.842605  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:55:37.858969  415354 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 19:55:37.894545  415354 api_server.go:72] duration metric: took 1m24.439023354s to wait for apiserver process to appear ...
	I0103 19:55:37.894572  415354 api_server.go:88] waiting for apiserver healthz status ...
	I0103 19:55:37.894606  415354 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0103 19:55:37.894667  415354 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0103 19:55:38.012277  415354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:55:38.092818  415354 cri.go:89] found id: "1faade0b1ccbd549284464fc35d84dff7c39ac561e2c9d623dbddc19f4d3376f"
	I0103 19:55:38.092841  415354 cri.go:89] found id: ""
	I0103 19:55:38.092849  415354 logs.go:284] 1 containers: [1faade0b1ccbd549284464fc35d84dff7c39ac561e2c9d623dbddc19f4d3376f]
	I0103 19:55:38.092914  415354 ssh_runner.go:195] Run: which crictl
	I0103 19:55:38.107188  415354 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0103 19:55:38.107265  415354 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0103 19:55:38.189011  415354 cri.go:89] found id: "5cabc9f1741c8b56aa9a19c0eab917f816d0cbf28813f861da55877f037eb09a"
	I0103 19:55:38.189037  415354 cri.go:89] found id: ""
	I0103 19:55:38.189045  415354 logs.go:284] 1 containers: [5cabc9f1741c8b56aa9a19c0eab917f816d0cbf28813f861da55877f037eb09a]
	I0103 19:55:38.189101  415354 ssh_runner.go:195] Run: which crictl
	I0103 19:55:38.203203  415354 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0103 19:55:38.203280  415354 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0103 19:55:38.274067  415354 cri.go:89] found id: "d17c13b24aa08668db9f4c720f60666152f1dac205b5916e1ff1ec7cef705a8e"
	I0103 19:55:38.274090  415354 cri.go:89] found id: ""
	I0103 19:55:38.274099  415354 logs.go:284] 1 containers: [d17c13b24aa08668db9f4c720f60666152f1dac205b5916e1ff1ec7cef705a8e]
	I0103 19:55:38.274162  415354 ssh_runner.go:195] Run: which crictl
	I0103 19:55:38.285503  415354 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0103 19:55:38.285572  415354 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0103 19:55:38.342703  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:55:38.508259  415354 cri.go:89] found id: "18d3ca7c1c09b5c77efad6ff4bb566db451858a7f6b103117707d9d92367ac64"
	I0103 19:55:38.508281  415354 cri.go:89] found id: ""
	I0103 19:55:38.508289  415354 logs.go:284] 1 containers: [18d3ca7c1c09b5c77efad6ff4bb566db451858a7f6b103117707d9d92367ac64]
	I0103 19:55:38.508345  415354 ssh_runner.go:195] Run: which crictl
	I0103 19:55:38.511382  415354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:55:38.525588  415354 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0103 19:55:38.525658  415354 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0103 19:55:38.735830  415354 cri.go:89] found id: "65e3dde0649c4a5f672399026ae7cb9d25107cc73959c6ed266f3c9efccf1395"
	I0103 19:55:38.735853  415354 cri.go:89] found id: ""
	I0103 19:55:38.735862  415354 logs.go:284] 1 containers: [65e3dde0649c4a5f672399026ae7cb9d25107cc73959c6ed266f3c9efccf1395]
	I0103 19:55:38.735924  415354 ssh_runner.go:195] Run: which crictl
	I0103 19:55:38.748697  415354 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0103 19:55:38.748776  415354 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0103 19:55:38.837618  415354 cri.go:89] found id: "70fe45f96a87a54c9f232221f19dc8528d556ca572561eff45b35fb4ce8de8c4"
	I0103 19:55:38.837639  415354 cri.go:89] found id: ""
	I0103 19:55:38.837648  415354 logs.go:284] 1 containers: [70fe45f96a87a54c9f232221f19dc8528d556ca572561eff45b35fb4ce8de8c4]
	I0103 19:55:38.837702  415354 ssh_runner.go:195] Run: which crictl
	I0103 19:55:38.843978  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:55:38.849817  415354 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0103 19:55:38.849892  415354 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0103 19:55:38.912892  415354 cri.go:89] found id: "6d02024b7b0a3949604c668502ffbd99fc70a3d509b9d1cce0f7eabdd9c75439"
	I0103 19:55:38.912914  415354 cri.go:89] found id: ""
	I0103 19:55:38.912921  415354 logs.go:284] 1 containers: [6d02024b7b0a3949604c668502ffbd99fc70a3d509b9d1cce0f7eabdd9c75439]
	I0103 19:55:38.912977  415354 ssh_runner.go:195] Run: which crictl
	I0103 19:55:38.923060  415354 logs.go:123] Gathering logs for kube-controller-manager [70fe45f96a87a54c9f232221f19dc8528d556ca572561eff45b35fb4ce8de8c4] ...
	I0103 19:55:38.923088  415354 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 70fe45f96a87a54c9f232221f19dc8528d556ca572561eff45b35fb4ce8de8c4"
	I0103 19:55:39.010144  415354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:55:39.022400  415354 logs.go:123] Gathering logs for container status ...
	I0103 19:55:39.022452  415354 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0103 19:55:39.127811  415354 logs.go:123] Gathering logs for describe nodes ...
	I0103 19:55:39.127848  415354 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0103 19:55:39.327370  415354 logs.go:123] Gathering logs for kube-proxy [65e3dde0649c4a5f672399026ae7cb9d25107cc73959c6ed266f3c9efccf1395] ...
	I0103 19:55:39.327406  415354 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 65e3dde0649c4a5f672399026ae7cb9d25107cc73959c6ed266f3c9efccf1395"
	I0103 19:55:39.342753  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:55:39.387136  415354 logs.go:123] Gathering logs for kube-apiserver [1faade0b1ccbd549284464fc35d84dff7c39ac561e2c9d623dbddc19f4d3376f] ...
	I0103 19:55:39.387167  415354 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1faade0b1ccbd549284464fc35d84dff7c39ac561e2c9d623dbddc19f4d3376f"
	I0103 19:55:39.487759  415354 logs.go:123] Gathering logs for etcd [5cabc9f1741c8b56aa9a19c0eab917f816d0cbf28813f861da55877f037eb09a] ...
	I0103 19:55:39.487796  415354 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5cabc9f1741c8b56aa9a19c0eab917f816d0cbf28813f861da55877f037eb09a"
	I0103 19:55:39.510509  415354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:55:39.585741  415354 logs.go:123] Gathering logs for coredns [d17c13b24aa08668db9f4c720f60666152f1dac205b5916e1ff1ec7cef705a8e] ...
	I0103 19:55:39.585776  415354 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d17c13b24aa08668db9f4c720f60666152f1dac205b5916e1ff1ec7cef705a8e"
	I0103 19:55:39.682013  415354 logs.go:123] Gathering logs for kube-scheduler [18d3ca7c1c09b5c77efad6ff4bb566db451858a7f6b103117707d9d92367ac64] ...
	I0103 19:55:39.682145  415354 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 18d3ca7c1c09b5c77efad6ff4bb566db451858a7f6b103117707d9d92367ac64"
	I0103 19:55:39.755828  415354 logs.go:123] Gathering logs for kindnet [6d02024b7b0a3949604c668502ffbd99fc70a3d509b9d1cce0f7eabdd9c75439] ...
	I0103 19:55:39.755897  415354 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6d02024b7b0a3949604c668502ffbd99fc70a3d509b9d1cce0f7eabdd9c75439"
	I0103 19:55:39.836954  415354 logs.go:123] Gathering logs for CRI-O ...
	I0103 19:55:39.836981  415354 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0103 19:55:39.845339  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:55:39.990333  415354 logs.go:123] Gathering logs for kubelet ...
	I0103 19:55:39.990411  415354 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0103 19:55:40.016344  415354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0103 19:55:40.061005  415354 logs.go:138] Found kubelet problem: Jan 03 19:54:44 addons-845596 kubelet[1346]: W0103 19:54:44.728593    1346 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-845596" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-845596' and this object
	W0103 19:55:40.061278  415354 logs.go:138] Found kubelet problem: Jan 03 19:54:44 addons-845596 kubelet[1346]: E0103 19:54:44.728674    1346 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-845596" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-845596' and this object
	I0103 19:55:40.104022  415354 logs.go:123] Gathering logs for dmesg ...
	I0103 19:55:40.104106  415354 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0103 19:55:40.135537  415354 out.go:309] Setting ErrFile to fd 2...
	I0103 19:55:40.135625  415354 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0103 19:55:40.135714  415354 out.go:239] X Problems detected in kubelet:
	W0103 19:55:40.135758  415354 out.go:239]   Jan 03 19:54:44 addons-845596 kubelet[1346]: W0103 19:54:44.728593    1346 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-845596" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-845596' and this object
	W0103 19:55:40.135795  415354 out.go:239]   Jan 03 19:54:44 addons-845596 kubelet[1346]: E0103 19:54:44.728674    1346 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-845596" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-845596' and this object
	I0103 19:55:40.135854  415354 out.go:309] Setting ErrFile to fd 2...
	I0103 19:55:40.135886  415354 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0103 19:55:40.342074  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:55:40.511293  415354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:55:40.842258  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:55:41.014610  415354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:55:41.341248  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:55:41.512799  415354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:55:41.842421  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:55:42.011593  415354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:55:42.343227  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:55:42.510164  415354 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0103 19:55:42.853748  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:55:43.012058  415354 kapi.go:107] duration metric: took 1m24.006419267s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0103 19:55:43.341939  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:55:43.841381  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:55:44.341544  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:55:44.841437  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:55:45.348268  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:55:45.842126  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:55:46.343075  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:55:46.879475  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:55:47.341265  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:55:47.841043  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:55:48.342092  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:55:48.840991  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:55:49.341891  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:55:49.842367  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:55:50.136855  415354 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0103 19:55:50.158936  415354 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0103 19:55:50.160417  415354 api_server.go:141] control plane version: v1.28.4
	I0103 19:55:50.160447  415354 api_server.go:131] duration metric: took 12.265867895s to wait for apiserver health ...
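The healthz wait above is a timed GET against https://192.168.49.2:8443/healthz that succeeds on a 200 response with body "ok". A sketch of that probe; certificate verification is skipped here purely for brevity (an assumption of the sketch — minikube itself trusts the cluster CA):

// healthz_sketch.go — sketch of the apiserver healthz probe.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Sketch-only assumption; do not skip verification in real code.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://192.168.49.2:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("returned %d: %s\n", resp.StatusCode, body) // expect: returned 200: ok
}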
	I0103 19:55:50.160457  415354 system_pods.go:43] waiting for kube-system pods to appear ...
	I0103 19:55:50.160480  415354 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0103 19:55:50.160550  415354 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0103 19:55:50.231119  415354 cri.go:89] found id: "1faade0b1ccbd549284464fc35d84dff7c39ac561e2c9d623dbddc19f4d3376f"
	I0103 19:55:50.231140  415354 cri.go:89] found id: ""
	I0103 19:55:50.231147  415354 logs.go:284] 1 containers: [1faade0b1ccbd549284464fc35d84dff7c39ac561e2c9d623dbddc19f4d3376f]
	I0103 19:55:50.231205  415354 ssh_runner.go:195] Run: which crictl
	I0103 19:55:50.237119  415354 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0103 19:55:50.237192  415354 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0103 19:55:50.340218  415354 cri.go:89] found id: "5cabc9f1741c8b56aa9a19c0eab917f816d0cbf28813f861da55877f037eb09a"
	I0103 19:55:50.340300  415354 cri.go:89] found id: ""
	I0103 19:55:50.340332  415354 logs.go:284] 1 containers: [5cabc9f1741c8b56aa9a19c0eab917f816d0cbf28813f861da55877f037eb09a]
	I0103 19:55:50.340477  415354 ssh_runner.go:195] Run: which crictl
	I0103 19:55:50.342322  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:55:50.349499  415354 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0103 19:55:50.349689  415354 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0103 19:55:50.403874  415354 cri.go:89] found id: "d17c13b24aa08668db9f4c720f60666152f1dac205b5916e1ff1ec7cef705a8e"
	I0103 19:55:50.403944  415354 cri.go:89] found id: ""
	I0103 19:55:50.403964  415354 logs.go:284] 1 containers: [d17c13b24aa08668db9f4c720f60666152f1dac205b5916e1ff1ec7cef705a8e]
	I0103 19:55:50.404051  415354 ssh_runner.go:195] Run: which crictl
	I0103 19:55:50.420061  415354 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0103 19:55:50.420183  415354 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0103 19:55:50.486001  415354 cri.go:89] found id: "18d3ca7c1c09b5c77efad6ff4bb566db451858a7f6b103117707d9d92367ac64"
	I0103 19:55:50.486071  415354 cri.go:89] found id: ""
	I0103 19:55:50.486093  415354 logs.go:284] 1 containers: [18d3ca7c1c09b5c77efad6ff4bb566db451858a7f6b103117707d9d92367ac64]
	I0103 19:55:50.486178  415354 ssh_runner.go:195] Run: which crictl
	I0103 19:55:50.502151  415354 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0103 19:55:50.502279  415354 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0103 19:55:50.559264  415354 cri.go:89] found id: "65e3dde0649c4a5f672399026ae7cb9d25107cc73959c6ed266f3c9efccf1395"
	I0103 19:55:50.559334  415354 cri.go:89] found id: ""
	I0103 19:55:50.559354  415354 logs.go:284] 1 containers: [65e3dde0649c4a5f672399026ae7cb9d25107cc73959c6ed266f3c9efccf1395]
	I0103 19:55:50.559444  415354 ssh_runner.go:195] Run: which crictl
	I0103 19:55:50.564920  415354 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0103 19:55:50.565072  415354 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0103 19:55:50.617648  415354 cri.go:89] found id: "70fe45f96a87a54c9f232221f19dc8528d556ca572561eff45b35fb4ce8de8c4"
	I0103 19:55:50.617725  415354 cri.go:89] found id: ""
	I0103 19:55:50.617746  415354 logs.go:284] 1 containers: [70fe45f96a87a54c9f232221f19dc8528d556ca572561eff45b35fb4ce8de8c4]
	I0103 19:55:50.617838  415354 ssh_runner.go:195] Run: which crictl
	I0103 19:55:50.630817  415354 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0103 19:55:50.630897  415354 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0103 19:55:50.677397  415354 cri.go:89] found id: "6d02024b7b0a3949604c668502ffbd99fc70a3d509b9d1cce0f7eabdd9c75439"
	I0103 19:55:50.677467  415354 cri.go:89] found id: ""
	I0103 19:55:50.677489  415354 logs.go:284] 1 containers: [6d02024b7b0a3949604c668502ffbd99fc70a3d509b9d1cce0f7eabdd9c75439]
	I0103 19:55:50.677573  415354 ssh_runner.go:195] Run: which crictl
	I0103 19:55:50.683030  415354 logs.go:123] Gathering logs for dmesg ...
	I0103 19:55:50.683105  415354 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0103 19:55:50.706369  415354 logs.go:123] Gathering logs for describe nodes ...
	I0103 19:55:50.706450  415354 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0103 19:55:50.861188  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:55:50.945166  415354 logs.go:123] Gathering logs for kube-apiserver [1faade0b1ccbd549284464fc35d84dff7c39ac561e2c9d623dbddc19f4d3376f] ...
	I0103 19:55:50.945236  415354 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1faade0b1ccbd549284464fc35d84dff7c39ac561e2c9d623dbddc19f4d3376f"
	I0103 19:55:51.070102  415354 logs.go:123] Gathering logs for kube-controller-manager [70fe45f96a87a54c9f232221f19dc8528d556ca572561eff45b35fb4ce8de8c4] ...
	I0103 19:55:51.070185  415354 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 70fe45f96a87a54c9f232221f19dc8528d556ca572561eff45b35fb4ce8de8c4"
	I0103 19:55:51.212804  415354 logs.go:123] Gathering logs for kindnet [6d02024b7b0a3949604c668502ffbd99fc70a3d509b9d1cce0f7eabdd9c75439] ...
	I0103 19:55:51.212880  415354 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6d02024b7b0a3949604c668502ffbd99fc70a3d509b9d1cce0f7eabdd9c75439"
	I0103 19:55:51.271682  415354 logs.go:123] Gathering logs for CRI-O ...
	I0103 19:55:51.271707  415354 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0103 19:55:51.342545  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:55:51.396663  415354 logs.go:123] Gathering logs for kubelet ...
	I0103 19:55:51.396706  415354 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0103 19:55:51.481218  415354 logs.go:138] Found kubelet problem: Jan 03 19:54:44 addons-845596 kubelet[1346]: W0103 19:54:44.728593    1346 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-845596" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-845596' and this object
	W0103 19:55:51.481490  415354 logs.go:138] Found kubelet problem: Jan 03 19:54:44 addons-845596 kubelet[1346]: E0103 19:54:44.728674    1346 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-845596" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-845596' and this object
	I0103 19:55:51.523962  415354 logs.go:123] Gathering logs for etcd [5cabc9f1741c8b56aa9a19c0eab917f816d0cbf28813f861da55877f037eb09a] ...
	I0103 19:55:51.524042  415354 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5cabc9f1741c8b56aa9a19c0eab917f816d0cbf28813f861da55877f037eb09a"
	I0103 19:55:51.601574  415354 logs.go:123] Gathering logs for coredns [d17c13b24aa08668db9f4c720f60666152f1dac205b5916e1ff1ec7cef705a8e] ...
	I0103 19:55:51.601653  415354 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d17c13b24aa08668db9f4c720f60666152f1dac205b5916e1ff1ec7cef705a8e"
	I0103 19:55:51.656850  415354 logs.go:123] Gathering logs for kube-scheduler [18d3ca7c1c09b5c77efad6ff4bb566db451858a7f6b103117707d9d92367ac64] ...
	I0103 19:55:51.656881  415354 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 18d3ca7c1c09b5c77efad6ff4bb566db451858a7f6b103117707d9d92367ac64"
	I0103 19:55:51.713127  415354 logs.go:123] Gathering logs for kube-proxy [65e3dde0649c4a5f672399026ae7cb9d25107cc73959c6ed266f3c9efccf1395] ...
	I0103 19:55:51.713158  415354 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 65e3dde0649c4a5f672399026ae7cb9d25107cc73959c6ed266f3c9efccf1395"
	I0103 19:55:51.755378  415354 logs.go:123] Gathering logs for container status ...
	I0103 19:55:51.755408  415354 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0103 19:55:51.823405  415354 out.go:309] Setting ErrFile to fd 2...
	I0103 19:55:51.823432  415354 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0103 19:55:51.823504  415354 out.go:239] X Problems detected in kubelet:
	W0103 19:55:51.823518  415354 out.go:239]   Jan 03 19:54:44 addons-845596 kubelet[1346]: W0103 19:54:44.728593    1346 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-845596" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-845596' and this object
	W0103 19:55:51.823526  415354 out.go:239]   Jan 03 19:54:44 addons-845596 kubelet[1346]: E0103 19:54:44.728674    1346 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-845596" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-845596' and this object
	I0103 19:55:51.823546  415354 out.go:309] Setting ErrFile to fd 2...
	I0103 19:55:51.823554  415354 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0103 19:55:51.842031  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:55:52.341464  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:55:52.842265  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:55:53.341932  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:55:53.841219  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:55:54.342274  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:55:54.842073  415354 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0103 19:55:55.341723  415354 kapi.go:107] duration metric: took 1m36.005942341s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0103 19:55:55.343600  415354 out.go:177] * Enabled addons: cloud-spanner, storage-provisioner, nvidia-device-plugin, ingress-dns, storage-provisioner-rancher, inspektor-gadget, metrics-server, yakd, default-storageclass, volumesnapshots, registry, gcp-auth, ingress, csi-hostpath-driver
	I0103 19:55:55.345249  415354 addons.go:508] enable addons completed in 1m42.991240397s: enabled=[cloud-spanner storage-provisioner nvidia-device-plugin ingress-dns storage-provisioner-rancher inspektor-gadget metrics-server yakd default-storageclass volumesnapshots registry gcp-auth ingress csi-hostpath-driver]
	I0103 19:56:01.835943  415354 system_pods.go:59] 18 kube-system pods found
	I0103 19:56:01.835983  415354 system_pods.go:61] "coredns-5dd5756b68-kr7hh" [87608906-e989-4802-a1ca-e0824072dfac] Running
	I0103 19:56:01.835990  415354 system_pods.go:61] "csi-hostpath-attacher-0" [2f22b036-543f-4b1c-9026-9f543cc70300] Running
	I0103 19:56:01.835996  415354 system_pods.go:61] "csi-hostpath-resizer-0" [94fc1ce6-af92-45a6-9024-a3e6cb255ad6] Running
	I0103 19:56:01.836002  415354 system_pods.go:61] "csi-hostpathplugin-x7l5m" [d9f507dc-37ff-4555-a15f-5666246df460] Running
	I0103 19:56:01.836007  415354 system_pods.go:61] "etcd-addons-845596" [4d6fd541-319b-4536-ab4d-8769318e1cad] Running
	I0103 19:56:01.836014  415354 system_pods.go:61] "kindnet-hvxgx" [2c9d8262-dba6-42a3-abea-b8ed55cfbb2a] Running
	I0103 19:56:01.836019  415354 system_pods.go:61] "kube-apiserver-addons-845596" [ccedf3cd-3d59-4f21-b7e8-197268ad44b7] Running
	I0103 19:56:01.836025  415354 system_pods.go:61] "kube-controller-manager-addons-845596" [a481e05e-5306-431c-a166-aa330508a30a] Running
	I0103 19:56:01.836038  415354 system_pods.go:61] "kube-ingress-dns-minikube" [b6d8cd14-d18b-422c-b468-9692f5ce8618] Running / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0103 19:56:01.836044  415354 system_pods.go:61] "kube-proxy-l9r8j" [bd7e07ad-a041-40b8-a7ec-45769bfbf075] Running
	I0103 19:56:01.836056  415354 system_pods.go:61] "kube-scheduler-addons-845596" [6e3e6c89-ba91-4f8f-9b85-50046bb4e8f5] Running
	I0103 19:56:01.836062  415354 system_pods.go:61] "metrics-server-7c66d45ddc-rhh5h" [9c2fc839-4c16-4364-aab4-4d2c62c7b4d5] Running
	I0103 19:56:01.836070  415354 system_pods.go:61] "nvidia-device-plugin-daemonset-jv75d" [d6c4dc1b-e6f6-4016-8a0c-c8156e31df4c] Running
	I0103 19:56:01.836075  415354 system_pods.go:61] "registry-hw4tv" [0a9a5b31-9d9e-49dd-aa9d-06cb07d586af] Running
	I0103 19:56:01.836083  415354 system_pods.go:61] "registry-proxy-hlftp" [8f750323-8b7d-46c9-b468-bf0deea921d1] Running
	I0103 19:56:01.836090  415354 system_pods.go:61] "snapshot-controller-58dbcc7b99-lnf5s" [3b99f199-b652-4caf-b06e-c0c6a5112204] Running
	I0103 19:56:01.836095  415354 system_pods.go:61] "snapshot-controller-58dbcc7b99-ngz7q" [e983e9de-fea8-4222-b8eb-fb26a57908a7] Running
	I0103 19:56:01.836102  415354 system_pods.go:61] "storage-provisioner" [2d1e04fd-d9e7-4818-9e46-ab36c4c692ca] Running
	I0103 19:56:01.836111  415354 system_pods.go:74] duration metric: took 11.675645942s to wait for pod list to return data ...
	I0103 19:56:01.836118  415354 default_sa.go:34] waiting for default service account to be created ...
	I0103 19:56:01.838688  415354 default_sa.go:45] found service account: "default"
	I0103 19:56:01.838718  415354 default_sa.go:55] duration metric: took 2.588632ms for default service account to be created ...
	I0103 19:56:01.838727  415354 system_pods.go:116] waiting for k8s-apps to be running ...
	I0103 19:56:01.848718  415354 system_pods.go:86] 18 kube-system pods found
	I0103 19:56:01.848756  415354 system_pods.go:89] "coredns-5dd5756b68-kr7hh" [87608906-e989-4802-a1ca-e0824072dfac] Running
	I0103 19:56:01.848764  415354 system_pods.go:89] "csi-hostpath-attacher-0" [2f22b036-543f-4b1c-9026-9f543cc70300] Running
	I0103 19:56:01.848770  415354 system_pods.go:89] "csi-hostpath-resizer-0" [94fc1ce6-af92-45a6-9024-a3e6cb255ad6] Running
	I0103 19:56:01.848775  415354 system_pods.go:89] "csi-hostpathplugin-x7l5m" [d9f507dc-37ff-4555-a15f-5666246df460] Running
	I0103 19:56:01.848779  415354 system_pods.go:89] "etcd-addons-845596" [4d6fd541-319b-4536-ab4d-8769318e1cad] Running
	I0103 19:56:01.848785  415354 system_pods.go:89] "kindnet-hvxgx" [2c9d8262-dba6-42a3-abea-b8ed55cfbb2a] Running
	I0103 19:56:01.848790  415354 system_pods.go:89] "kube-apiserver-addons-845596" [ccedf3cd-3d59-4f21-b7e8-197268ad44b7] Running
	I0103 19:56:01.848795  415354 system_pods.go:89] "kube-controller-manager-addons-845596" [a481e05e-5306-431c-a166-aa330508a30a] Running
	I0103 19:56:01.848811  415354 system_pods.go:89] "kube-ingress-dns-minikube" [b6d8cd14-d18b-422c-b468-9692f5ce8618] Running / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0103 19:56:01.848821  415354 system_pods.go:89] "kube-proxy-l9r8j" [bd7e07ad-a041-40b8-a7ec-45769bfbf075] Running
	I0103 19:56:01.848827  415354 system_pods.go:89] "kube-scheduler-addons-845596" [6e3e6c89-ba91-4f8f-9b85-50046bb4e8f5] Running
	I0103 19:56:01.848831  415354 system_pods.go:89] "metrics-server-7c66d45ddc-rhh5h" [9c2fc839-4c16-4364-aab4-4d2c62c7b4d5] Running
	I0103 19:56:01.848837  415354 system_pods.go:89] "nvidia-device-plugin-daemonset-jv75d" [d6c4dc1b-e6f6-4016-8a0c-c8156e31df4c] Running
	I0103 19:56:01.848846  415354 system_pods.go:89] "registry-hw4tv" [0a9a5b31-9d9e-49dd-aa9d-06cb07d586af] Running
	I0103 19:56:01.848851  415354 system_pods.go:89] "registry-proxy-hlftp" [8f750323-8b7d-46c9-b468-bf0deea921d1] Running
	I0103 19:56:01.848856  415354 system_pods.go:89] "snapshot-controller-58dbcc7b99-lnf5s" [3b99f199-b652-4caf-b06e-c0c6a5112204] Running
	I0103 19:56:01.848860  415354 system_pods.go:89] "snapshot-controller-58dbcc7b99-ngz7q" [e983e9de-fea8-4222-b8eb-fb26a57908a7] Running
	I0103 19:56:01.848865  415354 system_pods.go:89] "storage-provisioner" [2d1e04fd-d9e7-4818-9e46-ab36c4c692ca] Running
	I0103 19:56:01.848874  415354 system_pods.go:126] duration metric: took 10.141407ms to wait for k8s-apps to be running ...
	I0103 19:56:01.848883  415354 system_svc.go:44] waiting for kubelet service to be running ....
	I0103 19:56:01.848942  415354 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0103 19:56:01.863172  415354 system_svc.go:56] duration metric: took 14.279177ms WaitForService to wait for kubelet.
	I0103 19:56:01.863196  415354 kubeadm.go:581] duration metric: took 1m48.407681158s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0103 19:56:01.863217  415354 node_conditions.go:102] verifying NodePressure condition ...
	I0103 19:56:01.866787  415354 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0103 19:56:01.866835  415354 node_conditions.go:123] node cpu capacity is 2
	I0103 19:56:01.866848  415354 node_conditions.go:105] duration metric: took 3.624311ms to run NodePressure ...
	I0103 19:56:01.866879  415354 start.go:228] waiting for startup goroutines ...
	I0103 19:56:01.866892  415354 start.go:233] waiting for cluster config update ...
	I0103 19:56:01.866908  415354 start.go:242] writing updated cluster config ...
	I0103 19:56:01.867211  415354 ssh_runner.go:195] Run: rm -f paused
	I0103 19:56:02.197866  415354 start.go:600] kubectl: 1.29.0, cluster: 1.28.4 (minor skew: 1)
	I0103 19:56:02.204110  415354 out.go:177] * Done! kubectl is now configured to use "addons-845596" cluster and "default" namespace by default
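
The closing log line above reports a client/server skew of one minor version (kubectl 1.29.0 against cluster 1.28.4). kubectl supports targets within one minor version of the client, so the note is informational rather than an error. A minimal re-check from the same shell, assuming kubectl is on PATH and the context was written as reported:

    kubectl config current-context   # expected: addons-845596
    kubectl version                  # prints client and server versions side by side
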
	
	
	==> CRI-O <==
	Jan 03 20:00:02 addons-845596 crio[883]: time="2024-01-03 20:00:02.830861665Z" level=info msg="Checking image status: gcr.io/google-samples/hello-app:1.0" id=fdb2da93-d47c-47ff-80e5-5c31ec1385a5 name=/runtime.v1.ImageService/ImageStatus
	Jan 03 20:00:02 addons-845596 crio[883]: time="2024-01-03 20:00:02.831057635Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,RepoTags:[gcr.io/google-samples/hello-app:1.0],RepoDigests:[gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7],Size_:28999827,Uid:nil,Username:nonroot,Spec:nil,},Info:map[string]string{},}" id=fdb2da93-d47c-47ff-80e5-5c31ec1385a5 name=/runtime.v1.ImageService/ImageStatus
	Jan 03 20:00:02 addons-845596 crio[883]: time="2024-01-03 20:00:02.831811503Z" level=info msg="Creating container: default/hello-world-app-5d77478584-dh66t/hello-world-app" id=90a31ed6-4c67-4ea7-aa90-3d7440d230b4 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 03 20:00:02 addons-845596 crio[883]: time="2024-01-03 20:00:02.831908979Z" level=warning msg="Allowed annotations are specified for workload []"
	Jan 03 20:00:02 addons-845596 crio[883]: time="2024-01-03 20:00:02.905397544Z" level=info msg="Created container a58fb13d904c407fd9fd31a5498dbae3874c9afdf4fabfb611348214f1d95cb4: default/hello-world-app-5d77478584-dh66t/hello-world-app" id=90a31ed6-4c67-4ea7-aa90-3d7440d230b4 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 03 20:00:02 addons-845596 crio[883]: time="2024-01-03 20:00:02.906279879Z" level=info msg="Starting container: a58fb13d904c407fd9fd31a5498dbae3874c9afdf4fabfb611348214f1d95cb4" id=bd96342f-7173-46c2-b798-f13294bd523f name=/runtime.v1.RuntimeService/StartContainer
	Jan 03 20:00:02 addons-845596 crio[883]: time="2024-01-03 20:00:02.919975499Z" level=info msg="Started container" PID=8181 containerID=a58fb13d904c407fd9fd31a5498dbae3874c9afdf4fabfb611348214f1d95cb4 description=default/hello-world-app-5d77478584-dh66t/hello-world-app id=bd96342f-7173-46c2-b798-f13294bd523f name=/runtime.v1.RuntimeService/StartContainer sandboxID=e11bb088940503c43fd5ba4caedde2a7d9db829a86ff36b76a684a311133e639
	Jan 03 20:00:02 addons-845596 conmon[8170]: conmon a58fb13d904c407fd9fd <ninfo>: container 8181 exited with status 1
	Jan 03 20:00:02 addons-845596 crio[883]: time="2024-01-03 20:00:02.978004439Z" level=info msg="Stopping container: c42b5554fe5e60aea41844d3c6db7c4e9e21b185df25e1e725c7b7fb59a74071 (timeout: 2s)" id=cb412f24-a84c-48b8-841d-773a873464eb name=/runtime.v1.RuntimeService/StopContainer
	Jan 03 20:00:03 addons-845596 crio[883]: time="2024-01-03 20:00:03.762425042Z" level=info msg="Removing container: 3e43ddeca5a27f0b53e1973e63f03c4e3f05faa3d71554c2e212d39187d53830" id=cf15b62c-91a0-4e8b-88ba-432b6ce13701 name=/runtime.v1.RuntimeService/RemoveContainer
	Jan 03 20:00:03 addons-845596 crio[883]: time="2024-01-03 20:00:03.795981227Z" level=info msg="Removed container 3e43ddeca5a27f0b53e1973e63f03c4e3f05faa3d71554c2e212d39187d53830: default/hello-world-app-5d77478584-dh66t/hello-world-app" id=cf15b62c-91a0-4e8b-88ba-432b6ce13701 name=/runtime.v1.RuntimeService/RemoveContainer
	Jan 03 20:00:04 addons-845596 crio[883]: time="2024-01-03 20:00:04.985653307Z" level=warning msg="Stopping container c42b5554fe5e60aea41844d3c6db7c4e9e21b185df25e1e725c7b7fb59a74071 with stop signal timed out: timeout reached after 2 seconds waiting for container process to exit" id=cb412f24-a84c-48b8-841d-773a873464eb name=/runtime.v1.RuntimeService/StopContainer
	Jan 03 20:00:05 addons-845596 conmon[4940]: conmon c42b5554fe5e60aea418 <ninfo>: container 4951 exited with status 137
	Jan 03 20:00:05 addons-845596 crio[883]: time="2024-01-03 20:00:05.133322349Z" level=info msg="Stopped container c42b5554fe5e60aea41844d3c6db7c4e9e21b185df25e1e725c7b7fb59a74071: ingress-nginx/ingress-nginx-controller-69cff4fd79-z9xnc/controller" id=cb412f24-a84c-48b8-841d-773a873464eb name=/runtime.v1.RuntimeService/StopContainer
	Jan 03 20:00:05 addons-845596 crio[883]: time="2024-01-03 20:00:05.133943755Z" level=info msg="Stopping pod sandbox: 2802633f7cde6748e2124f592e505904ea6a67a2d0fda9b5b3cb4a548cca5fb3" id=e2a03949-0385-46f4-a1ae-a48163140135 name=/runtime.v1.RuntimeService/StopPodSandbox
	Jan 03 20:00:05 addons-845596 crio[883]: time="2024-01-03 20:00:05.137808541Z" level=info msg="Restoring iptables rules: *nat\n:KUBE-HOSTPORTS - [0:0]\n:KUBE-HP-N4A4LEURTGQWEW3Q - [0:0]\n:KUBE-HP-3F5E234SKEJIOL6Q - [0:0]\n-X KUBE-HP-3F5E234SKEJIOL6Q\n-X KUBE-HP-N4A4LEURTGQWEW3Q\nCOMMIT\n"
	Jan 03 20:00:05 addons-845596 crio[883]: time="2024-01-03 20:00:05.139627666Z" level=info msg="Closing host port tcp:80"
	Jan 03 20:00:05 addons-845596 crio[883]: time="2024-01-03 20:00:05.139685568Z" level=info msg="Closing host port tcp:443"
	Jan 03 20:00:05 addons-845596 crio[883]: time="2024-01-03 20:00:05.141346519Z" level=info msg="Host port tcp:80 does not have an open socket"
	Jan 03 20:00:05 addons-845596 crio[883]: time="2024-01-03 20:00:05.141384031Z" level=info msg="Host port tcp:443 does not have an open socket"
	Jan 03 20:00:05 addons-845596 crio[883]: time="2024-01-03 20:00:05.141589580Z" level=info msg="Got pod network &{Name:ingress-nginx-controller-69cff4fd79-z9xnc Namespace:ingress-nginx ID:2802633f7cde6748e2124f592e505904ea6a67a2d0fda9b5b3cb4a548cca5fb3 UID:f4171327-ea32-47bb-8816-1e26ec07e30b NetNS:/var/run/netns/9a87c098-7655-4567-b2a4-dfcde6e2d707 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Jan 03 20:00:05 addons-845596 crio[883]: time="2024-01-03 20:00:05.141732510Z" level=info msg="Deleting pod ingress-nginx_ingress-nginx-controller-69cff4fd79-z9xnc from CNI network \"kindnet\" (type=ptp)"
	Jan 03 20:00:05 addons-845596 crio[883]: time="2024-01-03 20:00:05.168431252Z" level=info msg="Stopped pod sandbox: 2802633f7cde6748e2124f592e505904ea6a67a2d0fda9b5b3cb4a548cca5fb3" id=e2a03949-0385-46f4-a1ae-a48163140135 name=/runtime.v1.RuntimeService/StopPodSandbox
	Jan 03 20:00:05 addons-845596 crio[883]: time="2024-01-03 20:00:05.768756547Z" level=info msg="Removing container: c42b5554fe5e60aea41844d3c6db7c4e9e21b185df25e1e725c7b7fb59a74071" id=95c3e501-54f3-4a16-ad7c-df7ab877e541 name=/runtime.v1.RuntimeService/RemoveContainer
	Jan 03 20:00:05 addons-845596 crio[883]: time="2024-01-03 20:00:05.784336592Z" level=info msg="Removed container c42b5554fe5e60aea41844d3c6db7c4e9e21b185df25e1e725c7b7fb59a74071: ingress-nginx/ingress-nginx-controller-69cff4fd79-z9xnc/controller" id=95c3e501-54f3-4a16-ad7c-df7ab877e541 name=/runtime.v1.RuntimeService/RemoveContainer
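
The CRI-O excerpt above records two distinct events: hello-world-app is created and started (attempt 2) and exits with status 1 within the same second, while the ingress-nginx controller ignores its stop signal, is killed when the 2s timeout expires, and exits with status 137 (128 + SIGKILL). A sketch for inspecting the failing container directly on the node, assuming crictl is configured against the CRI-O socket (e.g. from a minikube ssh session):

    # list containers including exited ones, filtered by name
    sudo crictl ps -a --name hello-world-app
    # output of the run that exited with status 1 (ID prefix taken from the log above)
    sudo crictl logs a58fb13d904c4
    # exit code, finish time, and restart metadata as JSON
    sudo crictl inspect a58fb13d904c4
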
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	a58fb13d904c4       dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79                                                             7 seconds ago       Exited              hello-world-app           2                   e11bb08894050       hello-world-app-5d77478584-dh66t
	3e35117d4a605       docker.io/library/nginx@sha256:7913e8fa2e6a5f0160a5e6b7ea48b7d4a301c6058d63c3d632a35a59093cb4eb                              2 minutes ago       Running             nginx                     0                   a93c5b73cefc1       nginx
	9a72afd98c4b8       ghcr.io/headlamp-k8s/headlamp@sha256:0fe50c48c186b89ff3d341dba427174d8232a64c3062af5de854a3a7cb2105ce                        4 minutes ago       Running             headlamp                  0                   fe7a2a776e825       headlamp-7ddfbb94ff-8bc2f
	f20ed905f25f9       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:63b520448091bc94aa4dba00d6b3b3c25e410c4fb73aa46feae5b25f9895abaa                 4 minutes ago       Running             gcp-auth                  0                   15396df3ae227       gcp-auth-d4c87556c-d9p5n
	1770b382668f0       af594c6a879f2e441ea446a122296abbbe11aae5547e780f2582fbcda5df271c                                                             4 minutes ago       Exited              patch                     2                   5451df42d9df6       ingress-nginx-admission-patch-9tfp8
	d8b25e6ccaba2       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:67202a0258c6f81d073f265f449a732c89cc1112a8e80ea27317294df6dce2b5   4 minutes ago       Exited              create                    0                   ce01853847887       ingress-nginx-admission-create-jg9m4
	91c7c42d66bbe       docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310                              5 minutes ago       Running             yakd                      0                   402dfb62c887e       yakd-dashboard-9947fc6bf-j4krk
	d17c13b24aa08       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                                             5 minutes ago       Running             coredns                   0                   8bbf3ad84bb47       coredns-5dd5756b68-kr7hh
	3b71c690ac7ce       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                             5 minutes ago       Running             storage-provisioner       0                   5121c6fb28b81       storage-provisioner
	65e3dde0649c4       3ca3ca488cf13fde14cfc4b3ffde0c53a8c161b030f4a444a797fba6aef38c39                                                             5 minutes ago       Running             kube-proxy                0                   6fa33d693bb79       kube-proxy-l9r8j
	6d02024b7b0a3       04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26                                                             5 minutes ago       Running             kindnet-cni               0                   9decc6cf8dc73       kindnet-hvxgx
	5cabc9f1741c8       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                                             6 minutes ago       Running             etcd                      0                   f03835af60cc5       etcd-addons-845596
	70fe45f96a87a       9961cbceaf234d59b7dcf8a197a024f3e3ce4b7fe2b67c2378efd3d209ca994b                                                             6 minutes ago       Running             kube-controller-manager   0                   e6949791dc214       kube-controller-manager-addons-845596
	18d3ca7c1c09b       05c284c929889d88306fdb3dd14ee2d0132543740f9e247685243214fc3d2c54                                                             6 minutes ago       Running             kube-scheduler            0                   f94f929013fb5       kube-scheduler-addons-845596
	1faade0b1ccbd       04b4c447bb9d4840af3bf7e836397379d65df87c86e55dcd27f31a8d11df2419                                                             6 minutes ago       Running             kube-apiserver            0                   01deb7965a47a       kube-apiserver-addons-845596
	
	
	==> coredns [d17c13b24aa08668db9f4c720f60666152f1dac205b5916e1ff1ec7cef705a8e] <==
	[INFO] 10.244.0.19:50418 - 23377 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000040352s
	[INFO] 10.244.0.19:50418 - 34039 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001623042s
	[INFO] 10.244.0.19:46741 - 64338 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.0023385s
	[INFO] 10.244.0.19:46741 - 43194 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001877637s
	[INFO] 10.244.0.19:50418 - 40191 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001525088s
	[INFO] 10.244.0.19:50418 - 24642 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000126129s
	[INFO] 10.244.0.19:46741 - 1193 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.0000448s
	[INFO] 10.244.0.19:33771 - 57654 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000105156s
	[INFO] 10.244.0.19:50237 - 57232 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000069907s
	[INFO] 10.244.0.19:33771 - 60238 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000065173s
	[INFO] 10.244.0.19:50237 - 42428 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000043028s
	[INFO] 10.244.0.19:33771 - 60433 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000042338s
	[INFO] 10.244.0.19:50237 - 33288 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000039228s
	[INFO] 10.244.0.19:33771 - 35032 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000040796s
	[INFO] 10.244.0.19:50237 - 5891 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000039204s
	[INFO] 10.244.0.19:33771 - 4193 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000044972s
	[INFO] 10.244.0.19:50237 - 3139 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.0000459s
	[INFO] 10.244.0.19:50237 - 38929 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000041985s
	[INFO] 10.244.0.19:33771 - 25708 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000045169s
	[INFO] 10.244.0.19:50237 - 41019 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001399026s
	[INFO] 10.244.0.19:33771 - 55739 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001612514s
	[INFO] 10.244.0.19:50237 - 7104 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001171687s
	[INFO] 10.244.0.19:33771 - 51842 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.000873924s
	[INFO] 10.244.0.19:50237 - 36703 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000060021s
	[INFO] 10.244.0.19:33771 - 53946 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000045867s
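
The burst of NXDOMAIN answers above is ordinary search-domain expansion, not a resolver failure: with ndots:5, the pod's resolver appends each suffix from its search list (including the host-inherited us-east-2.compute.internal domain) before trying the name as given, and only the fully qualified hello-world-app.default.svc.cluster.local query returns NOERROR. A representative pod /etc/resolv.conf under these defaults is sketched below; the nameserver address is an assumption (the conventional kube-dns ClusterIP in a default minikube service CIDR) and can be verified with kubectl -n kube-system get svc kube-dns:

    search default.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal
    nameserver 10.96.0.10    # assumed kube-dns ClusterIP; confirm before relying on it
    options ndots:5
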
	
	
	==> describe nodes <==
	Name:               addons-845596
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-845596
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1b6a81cbc05f28310ff11df4170e79e2b8bf477a
	                    minikube.k8s.io/name=addons-845596
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_03T19_53_59_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-845596
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 03 Jan 2024 19:53:55 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-845596
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 03 Jan 2024 20:00:06 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 03 Jan 2024 20:00:05 +0000   Wed, 03 Jan 2024 19:53:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 03 Jan 2024 20:00:05 +0000   Wed, 03 Jan 2024 19:53:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 03 Jan 2024 20:00:05 +0000   Wed, 03 Jan 2024 19:53:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 03 Jan 2024 20:00:05 +0000   Wed, 03 Jan 2024 19:54:44 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-845596
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022504Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022504Ki
	  pods:               110
	System Info:
	  Machine ID:                 95c3f879ccfd436abe08ecb8cf8236ac
	  System UUID:                12c58293-ece0-4768-959e-4144430ee631
	  Boot ID:                    75f8dc93-969c-4083-a399-3fa01ac68612
	  Kernel Version:             5.15.0-1051-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5d77478584-dh66t         0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m46s
	  gcp-auth                    gcp-auth-d4c87556c-d9p5n                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m47s
	  headlamp                    headlamp-7ddfbb94ff-8bc2f                0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m7s
	  kube-system                 coredns-5dd5756b68-kr7hh                 100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     5m58s
	  kube-system                 etcd-addons-845596                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         6m11s
	  kube-system                 kindnet-hvxgx                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      5m59s
	  kube-system                 kube-apiserver-addons-845596             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m11s
	  kube-system                 kube-controller-manager-addons-845596    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m13s
	  kube-system                 kube-proxy-l9r8j                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m59s
	  kube-system                 kube-scheduler-addons-845596             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m12s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m52s
	  yakd-dashboard              yakd-dashboard-9947fc6bf-j4krk           0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     5m52s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)   100m (5%)
	  memory             348Mi (4%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m51s                  kube-proxy       
	  Normal  Starting                 6m21s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m20s (x8 over 6m20s)  kubelet          Node addons-845596 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m20s (x8 over 6m20s)  kubelet          Node addons-845596 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m20s (x8 over 6m20s)  kubelet          Node addons-845596 status is now: NodeHasSufficientPID
	  Normal  Starting                 6m12s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m12s                  kubelet          Node addons-845596 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m12s                  kubelet          Node addons-845596 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m12s                  kubelet          Node addons-845596 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           5m59s                  node-controller  Node addons-845596 event: Registered Node addons-845596 in Controller
	  Normal  NodeReady                5m26s                  kubelet          Node addons-845596 status is now: NodeReady
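
As a consistency check, the Allocated resources block above can be derived from the pod table: CPU requests are 100m (coredns) + 100m (etcd) + 100m (kindnet) + 250m (kube-apiserver) + 200m (kube-controller-manager) + 100m (kube-scheduler) = 850m, and 850m of the node's 2000m capacity is 42.5%, which kubectl truncates to 42%. Likewise for memory: 70Mi + 100Mi + 50Mi + 128Mi = 348Mi of requests, and 170Mi + 50Mi + 256Mi = 476Mi of limits.
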
	
	
	==> dmesg <==
	[  +0.001035] FS-Cache: N-cookie d=000000008497ac2d{9p.inode} n=00000000078da515
	[  +0.001144] FS-Cache: N-key=[8] '97cfc90000000000'
	[  +0.002635] FS-Cache: Duplicate cookie detected
	[  +0.000780] FS-Cache: O-cookie c=00000006 [p=00000003 fl=226 nc=0 na=1]
	[  +0.001095] FS-Cache: O-cookie d=000000008497ac2d{9p.inode} n=000000001cd35e83
	[  +0.001204] FS-Cache: O-key=[8] '97cfc90000000000'
	[  +0.000787] FS-Cache: N-cookie c=0000000d [p=00000003 fl=2 nc=0 na=1]
	[  +0.001055] FS-Cache: N-cookie d=000000008497ac2d{9p.inode} n=00000000438832ec
	[  +0.001161] FS-Cache: N-key=[8] '97cfc90000000000'
	[  +2.434204] FS-Cache: Duplicate cookie detected
	[  +0.000775] FS-Cache: O-cookie c=00000004 [p=00000003 fl=226 nc=0 na=1]
	[  +0.001070] FS-Cache: O-cookie d=000000008497ac2d{9p.inode} n=0000000015296e21
	[  +0.001279] FS-Cache: O-key=[8] '96cfc90000000000'
	[  +0.000810] FS-Cache: N-cookie c=0000000f [p=00000003 fl=2 nc=0 na=1]
	[  +0.001053] FS-Cache: N-cookie d=000000008497ac2d{9p.inode} n=00000000078da515
	[  +0.001210] FS-Cache: N-key=[8] '96cfc90000000000'
	[  +0.409781] FS-Cache: Duplicate cookie detected
	[  +0.000780] FS-Cache: O-cookie c=00000009 [p=00000003 fl=226 nc=0 na=1]
	[  +0.001099] FS-Cache: O-cookie d=000000008497ac2d{9p.inode} n=00000000c4559084
	[  +0.001216] FS-Cache: O-key=[8] '9ecfc90000000000'
	[  +0.000779] FS-Cache: N-cookie c=00000010 [p=00000003 fl=2 nc=0 na=1]
	[  +0.001049] FS-Cache: N-cookie d=000000008497ac2d{9p.inode} n=000000009f261bd8
	[  +0.001152] FS-Cache: N-key=[8] '9ecfc90000000000'
	[Jan 3 18:49] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Jan 3 19:43] systemd-journald[222]: Failed to send stream file descriptor to service manager: Connection refused
	
	
	==> etcd [5cabc9f1741c8b56aa9a19c0eab917f816d0cbf28813f861da55877f037eb09a] <==
	{"level":"info","ts":"2024-01-03T19:53:51.830548Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2024-01-03T19:53:51.830677Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2024-01-03T19:53:51.830721Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2024-01-03T19:53:51.830766Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2024-01-03T19:53:51.830803Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-01-03T19:53:51.830844Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2024-01-03T19:53:51.830877Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-01-03T19:53:51.838225Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-845596 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-01-03T19:53:51.838339Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-03T19:53:51.839362Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-01-03T19:53:51.839527Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-03T19:53:51.84263Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-03T19:53:51.843565Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-01-03T19:53:51.846862Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-01-03T19:53:51.846926Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-01-03T19:53:51.842615Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-03T19:53:51.901814Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-03T19:53:51.901849Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-03T19:54:13.065887Z","caller":"traceutil/trace.go:171","msg":"trace[1763584773] transaction","detail":"{read_only:false; number_of_response:1; response_revision:400; }","duration":"179.288962ms","start":"2024-01-03T19:54:12.886584Z","end":"2024-01-03T19:54:13.065873Z","steps":["trace[1763584773] 'process raft request'  (duration: 179.17871ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-03T19:54:14.118951Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"169.069979ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-01-03T19:54:14.122683Z","caller":"traceutil/trace.go:171","msg":"trace[246465311] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:404; }","duration":"172.803548ms","start":"2024-01-03T19:54:13.949859Z","end":"2024-01-03T19:54:14.122663Z","steps":["trace[246465311] 'range keys from in-memory index tree'  (duration: 168.932397ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-03T19:54:14.145905Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"195.921674ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/configmaps/kube-system/coredns\" ","response":"range_response_count:1 size:612"}
	{"level":"info","ts":"2024-01-03T19:54:14.16264Z","caller":"traceutil/trace.go:171","msg":"trace[1656329864] transaction","detail":"{read_only:false; response_revision:405; number_of_response:1; }","duration":"212.510364ms","start":"2024-01-03T19:54:13.950108Z","end":"2024-01-03T19:54:14.162619Z","steps":["trace[1656329864] 'process raft request'  (duration: 168.54167ms)","trace[1656329864] 'compare'  (duration: 28.796595ms)"],"step_count":2}
	{"level":"info","ts":"2024-01-03T19:54:14.230898Z","caller":"traceutil/trace.go:171","msg":"trace[1175646675] range","detail":"{range_begin:/registry/configmaps/kube-system/coredns; range_end:; response_count:1; response_revision:404; }","duration":"196.022194ms","start":"2024-01-03T19:54:13.949932Z","end":"2024-01-03T19:54:14.145954Z","steps":["trace[1175646675] 'range keys from in-memory index tree'  (duration: 152.141827ms)","trace[1175646675] 'range keys from bolt db'  (duration: 43.727539ms)"],"step_count":2}
	{"level":"info","ts":"2024-01-03T19:54:16.259227Z","caller":"traceutil/trace.go:171","msg":"trace[1811863445] transaction","detail":"{read_only:false; response_revision:417; number_of_response:1; }","duration":"109.860576ms","start":"2024-01-03T19:54:16.149263Z","end":"2024-01-03T19:54:16.259124Z","steps":["trace[1811863445] 'process raft request'  (duration: 109.752736ms)"],"step_count":1}
	
	
	==> gcp-auth [f20ed905f25f9e821773df1816a5235e7c135156ca526edbb25428c536830ad4] <==
	2024/01/03 19:55:35 GCP Auth Webhook started!
	2024/01/03 19:56:03 Ready to marshal response ...
	2024/01/03 19:56:03 Ready to write response ...
	2024/01/03 19:56:03 Ready to marshal response ...
	2024/01/03 19:56:03 Ready to write response ...
	2024/01/03 19:56:03 Ready to marshal response ...
	2024/01/03 19:56:03 Ready to write response ...
	2024/01/03 19:56:14 Ready to marshal response ...
	2024/01/03 19:56:14 Ready to write response ...
	2024/01/03 19:56:21 Ready to marshal response ...
	2024/01/03 19:56:21 Ready to write response ...
	2024/01/03 19:56:21 Ready to marshal response ...
	2024/01/03 19:56:21 Ready to write response ...
	2024/01/03 19:56:30 Ready to marshal response ...
	2024/01/03 19:56:30 Ready to write response ...
	2024/01/03 19:56:38 Ready to marshal response ...
	2024/01/03 19:56:38 Ready to write response ...
	2024/01/03 19:56:59 Ready to marshal response ...
	2024/01/03 19:56:59 Ready to write response ...
	2024/01/03 19:57:24 Ready to marshal response ...
	2024/01/03 19:57:24 Ready to write response ...
	2024/01/03 19:59:43 Ready to marshal response ...
	2024/01/03 19:59:43 Ready to write response ...
	
	
	==> kernel <==
	 20:00:10 up  1:42,  0 users,  load average: 0.45, 1.58, 2.51
	Linux addons-845596 5.15.0-1051-aws #56~20.04.1-Ubuntu SMP Tue Nov 28 15:43:06 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	
	==> kindnet [6d02024b7b0a3949604c668502ffbd99fc70a3d509b9d1cce0f7eabdd9c75439] <==
	I0103 19:58:04.802007       1 main.go:227] handling current node
	I0103 19:58:14.814626       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0103 19:58:14.814732       1 main.go:227] handling current node
	I0103 19:58:24.826234       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0103 19:58:24.826261       1 main.go:227] handling current node
	I0103 19:58:34.831015       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0103 19:58:34.831045       1 main.go:227] handling current node
	I0103 19:58:44.835361       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0103 19:58:44.835390       1 main.go:227] handling current node
	I0103 19:58:54.845451       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0103 19:58:54.845589       1 main.go:227] handling current node
	I0103 19:59:04.849197       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0103 19:59:04.849227       1 main.go:227] handling current node
	I0103 19:59:14.853034       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0103 19:59:14.853061       1 main.go:227] handling current node
	I0103 19:59:24.857546       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0103 19:59:24.857574       1 main.go:227] handling current node
	I0103 19:59:34.870305       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0103 19:59:34.870332       1 main.go:227] handling current node
	I0103 19:59:44.874323       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0103 19:59:44.874350       1 main.go:227] handling current node
	I0103 19:59:54.881769       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0103 19:59:54.881797       1 main.go:227] handling current node
	I0103 20:00:04.894782       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0103 20:00:04.894886       1 main.go:227] handling current node
	
	
	==> kube-apiserver [1faade0b1ccbd549284464fc35d84dff7c39ac561e2c9d623dbddc19f4d3376f] <==
	I0103 19:57:16.479160       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0103 19:57:16.488945       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0103 19:57:16.489009       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0103 19:57:16.489928       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0103 19:57:16.489979       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0103 19:57:16.516885       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0103 19:57:16.516946       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0103 19:57:16.524477       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0103 19:57:16.524524       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0103 19:57:16.534645       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0103 19:57:16.534699       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0103 19:57:16.547859       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0103 19:57:16.547920       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0103 19:57:16.559510       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0103 19:57:16.559901       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0103 19:57:17.524721       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0103 19:57:17.560340       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0103 19:57:17.572768       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0103 19:57:20.263453       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	I0103 19:57:20.274791       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0103 19:57:21.293736       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0103 19:57:23.901542       1 controller.go:624] quota admission added evaluator for: ingresses.networking.k8s.io
	I0103 19:57:24.153955       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.98.8.181"}
	I0103 19:58:08.591955       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0103 19:59:44.213709       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.108.67.115"}
	
	
	==> kube-controller-manager [70fe45f96a87a54c9f232221f19dc8528d556ca572561eff45b35fb4ce8de8c4] <==
	E0103 19:59:14.177209       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0103 19:59:21.761174       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0103 19:59:21.761212       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0103 19:59:23.373689       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0103 19:59:23.373721       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0103 19:59:34.569368       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0103 19:59:34.569481       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0103 19:59:43.954641       1 event.go:307] "Event occurred" object="default/hello-world-app" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-world-app-5d77478584 to 1"
	I0103 19:59:44.001411       1 event.go:307] "Event occurred" object="default/hello-world-app-5d77478584" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-world-app-5d77478584-dh66t"
	I0103 19:59:44.011806       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="57.955085ms"
	I0103 19:59:44.039894       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="28.019438ms"
	I0103 19:59:44.040586       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="53.801µs"
	I0103 19:59:47.735613       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="69.604µs"
	I0103 19:59:48.740327       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="44.012µs"
	I0103 19:59:49.740978       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="77.292µs"
	W0103 19:59:56.946471       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0103 19:59:56.946636       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0103 20:00:01.940187       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-create"
	I0103 20:00:01.948958       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-patch"
	I0103 20:00:01.949534       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-69cff4fd79" duration="6.605µs"
	W0103 20:00:02.326430       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0103 20:00:02.326468       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0103 20:00:03.794382       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="133.743µs"
	W0103 20:00:09.174760       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0103 20:00:09.174797       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
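
The recurring "failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" warnings are consistent with the CRD removals visible in the kube-apiserver log (watchers for the volumesnapshot types and traces.gadget.kinvolk.io were terminated at 19:57): the controller-manager's metadata informers keep polling types that no longer exist until they resync. A quick check from the addons-845596 context that the CRDs are indeed gone:

    kubectl get crd | grep -E 'snapshot|gadget'   # no output expected after the teardown
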
	
	
	==> kube-proxy [65e3dde0649c4a5f672399026ae7cb9d25107cc73959c6ed266f3c9efccf1395] <==
	I0103 19:54:18.239583       1 server_others.go:69] "Using iptables proxy"
	I0103 19:54:18.435339       1 node.go:141] Successfully retrieved node IP: 192.168.49.2
	I0103 19:54:18.627831       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0103 19:54:18.630198       1 server_others.go:152] "Using iptables Proxier"
	I0103 19:54:18.630306       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0103 19:54:18.630339       1 server_others.go:438] "Defaulting to no-op detect-local"
	I0103 19:54:18.630469       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0103 19:54:18.630737       1 server.go:846] "Version info" version="v1.28.4"
	I0103 19:54:18.631003       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0103 19:54:18.633135       1 config.go:188] "Starting service config controller"
	I0103 19:54:18.633271       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0103 19:54:18.633342       1 config.go:97] "Starting endpoint slice config controller"
	I0103 19:54:18.633373       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0103 19:54:18.633935       1 config.go:315] "Starting node config controller"
	I0103 19:54:18.634000       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0103 19:54:18.735264       1 shared_informer.go:318] Caches are synced for node config
	I0103 19:54:18.735363       1 shared_informer.go:318] Caches are synced for service config
	I0103 19:54:18.735379       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [18d3ca7c1c09b5c77efad6ff4bb566db451858a7f6b103117707d9d92367ac64] <==
	W0103 19:53:56.304675       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0103 19:53:56.305550       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0103 19:53:56.304740       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0103 19:53:56.305631       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0103 19:53:56.304920       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0103 19:53:56.305709       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0103 19:53:56.304975       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0103 19:53:56.305788       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0103 19:53:56.305009       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0103 19:53:56.305877       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0103 19:53:56.305040       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0103 19:53:56.305960       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0103 19:53:56.305074       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0103 19:53:56.306063       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0103 19:53:56.305108       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0103 19:53:56.306144       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0103 19:53:56.305141       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0103 19:53:56.305169       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0103 19:53:56.305221       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0103 19:53:56.305266       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0103 19:53:56.306265       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0103 19:53:56.306313       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0103 19:53:56.306368       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0103 19:53:56.306429       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0103 19:53:57.875537       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jan 03 19:59:59 addons-845596 kubelet[1346]: E0103 19:59:59.087776    1346 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/ccf4f0d1fff234a603c847da96849eb4c4b58787b901b821a5dd18e839065c39/diff" to get inode usage: stat /var/lib/containers/storage/overlay/ccf4f0d1fff234a603c847da96849eb4c4b58787b901b821a5dd18e839065c39/diff: no such file or directory, extraDiskErr: <nil>
	Jan 03 19:59:59 addons-845596 kubelet[1346]: E0103 19:59:59.091006    1346 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/ceaaa5dce02d0550a344d5923022f506ddad3440505d72e15fb9d03106128fd0/diff" to get inode usage: stat /var/lib/containers/storage/overlay/ceaaa5dce02d0550a344d5923022f506ddad3440505d72e15fb9d03106128fd0/diff: no such file or directory, extraDiskErr: <nil>
	Jan 03 19:59:59 addons-845596 kubelet[1346]: E0103 19:59:59.092148    1346 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/aa4d6abc62f848b233e44b402aaa627ef57a4332bc86a8e1da40686e6b2cc23f/diff" to get inode usage: stat /var/lib/containers/storage/overlay/aa4d6abc62f848b233e44b402aaa627ef57a4332bc86a8e1da40686e6b2cc23f/diff: no such file or directory, extraDiskErr: <nil>
	Jan 03 20:00:00 addons-845596 kubelet[1346]: I0103 20:00:00.393881    1346 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q2t5b\" (UniqueName: \"kubernetes.io/projected/b6d8cd14-d18b-422c-b468-9692f5ce8618-kube-api-access-q2t5b\") pod \"b6d8cd14-d18b-422c-b468-9692f5ce8618\" (UID: \"b6d8cd14-d18b-422c-b468-9692f5ce8618\") "
	Jan 03 20:00:00 addons-845596 kubelet[1346]: I0103 20:00:00.402183    1346 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6d8cd14-d18b-422c-b468-9692f5ce8618-kube-api-access-q2t5b" (OuterVolumeSpecName: "kube-api-access-q2t5b") pod "b6d8cd14-d18b-422c-b468-9692f5ce8618" (UID: "b6d8cd14-d18b-422c-b468-9692f5ce8618"). InnerVolumeSpecName "kube-api-access-q2t5b". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jan 03 20:00:00 addons-845596 kubelet[1346]: I0103 20:00:00.502660    1346 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-q2t5b\" (UniqueName: \"kubernetes.io/projected/b6d8cd14-d18b-422c-b468-9692f5ce8618-kube-api-access-q2t5b\") on node \"addons-845596\" DevicePath \"\""
	Jan 03 20:00:00 addons-845596 kubelet[1346]: I0103 20:00:00.750506    1346 scope.go:117] "RemoveContainer" containerID="0329beb8f0aaf70c29bbbbcb7566d47d0e492d5d0fd4f052f4d1ec1de00101e7"
	Jan 03 20:00:02 addons-845596 kubelet[1346]: I0103 20:00:02.828793    1346 scope.go:117] "RemoveContainer" containerID="3e43ddeca5a27f0b53e1973e63f03c4e3f05faa3d71554c2e212d39187d53830"
	Jan 03 20:00:02 addons-845596 kubelet[1346]: I0103 20:00:02.831276    1346 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="21e6dd15-2723-4ade-8774-e1c4ab463dda" path="/var/lib/kubelet/pods/21e6dd15-2723-4ade-8774-e1c4ab463dda/volumes"
	Jan 03 20:00:02 addons-845596 kubelet[1346]: I0103 20:00:02.835010    1346 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="b6d8cd14-d18b-422c-b468-9692f5ce8618" path="/var/lib/kubelet/pods/b6d8cd14-d18b-422c-b468-9692f5ce8618/volumes"
	Jan 03 20:00:02 addons-845596 kubelet[1346]: I0103 20:00:02.836655    1346 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="f0a36dce-292d-434c-a0d2-cd01ff26b921" path="/var/lib/kubelet/pods/f0a36dce-292d-434c-a0d2-cd01ff26b921/volumes"
	Jan 03 20:00:03 addons-845596 kubelet[1346]: I0103 20:00:03.759371    1346 scope.go:117] "RemoveContainer" containerID="3e43ddeca5a27f0b53e1973e63f03c4e3f05faa3d71554c2e212d39187d53830"
	Jan 03 20:00:03 addons-845596 kubelet[1346]: I0103 20:00:03.759581    1346 scope.go:117] "RemoveContainer" containerID="a58fb13d904c407fd9fd31a5498dbae3874c9afdf4fabfb611348214f1d95cb4"
	Jan 03 20:00:03 addons-845596 kubelet[1346]: E0103 20:00:03.759856    1346 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with CrashLoopBackOff: \"back-off 20s restarting failed container=hello-world-app pod=hello-world-app-5d77478584-dh66t_default(8aa21703-c1d4-49fb-a7c7-bc58a3a0397f)\"" pod="default/hello-world-app-5d77478584-dh66t" podUID="8aa21703-c1d4-49fb-a7c7-bc58a3a0397f"
	Jan 03 20:00:05 addons-845596 kubelet[1346]: I0103 20:00:05.256240    1346 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f4171327-ea32-47bb-8816-1e26ec07e30b-webhook-cert\") pod \"f4171327-ea32-47bb-8816-1e26ec07e30b\" (UID: \"f4171327-ea32-47bb-8816-1e26ec07e30b\") "
	Jan 03 20:00:05 addons-845596 kubelet[1346]: I0103 20:00:05.256307    1346 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gr7nh\" (UniqueName: \"kubernetes.io/projected/f4171327-ea32-47bb-8816-1e26ec07e30b-kube-api-access-gr7nh\") pod \"f4171327-ea32-47bb-8816-1e26ec07e30b\" (UID: \"f4171327-ea32-47bb-8816-1e26ec07e30b\") "
	Jan 03 20:00:05 addons-845596 kubelet[1346]: I0103 20:00:05.259363    1346 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4171327-ea32-47bb-8816-1e26ec07e30b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "f4171327-ea32-47bb-8816-1e26ec07e30b" (UID: "f4171327-ea32-47bb-8816-1e26ec07e30b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jan 03 20:00:05 addons-845596 kubelet[1346]: I0103 20:00:05.264378    1346 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4171327-ea32-47bb-8816-1e26ec07e30b-kube-api-access-gr7nh" (OuterVolumeSpecName: "kube-api-access-gr7nh") pod "f4171327-ea32-47bb-8816-1e26ec07e30b" (UID: "f4171327-ea32-47bb-8816-1e26ec07e30b"). InnerVolumeSpecName "kube-api-access-gr7nh". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jan 03 20:00:05 addons-845596 kubelet[1346]: I0103 20:00:05.356639    1346 reconciler_common.go:300] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f4171327-ea32-47bb-8816-1e26ec07e30b-webhook-cert\") on node \"addons-845596\" DevicePath \"\""
	Jan 03 20:00:05 addons-845596 kubelet[1346]: I0103 20:00:05.356683    1346 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-gr7nh\" (UniqueName: \"kubernetes.io/projected/f4171327-ea32-47bb-8816-1e26ec07e30b-kube-api-access-gr7nh\") on node \"addons-845596\" DevicePath \"\""
	Jan 03 20:00:05 addons-845596 kubelet[1346]: I0103 20:00:05.767230    1346 scope.go:117] "RemoveContainer" containerID="c42b5554fe5e60aea41844d3c6db7c4e9e21b185df25e1e725c7b7fb59a74071"
	Jan 03 20:00:05 addons-845596 kubelet[1346]: I0103 20:00:05.784598    1346 scope.go:117] "RemoveContainer" containerID="c42b5554fe5e60aea41844d3c6db7c4e9e21b185df25e1e725c7b7fb59a74071"
	Jan 03 20:00:05 addons-845596 kubelet[1346]: E0103 20:00:05.785015    1346 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c42b5554fe5e60aea41844d3c6db7c4e9e21b185df25e1e725c7b7fb59a74071\": container with ID starting with c42b5554fe5e60aea41844d3c6db7c4e9e21b185df25e1e725c7b7fb59a74071 not found: ID does not exist" containerID="c42b5554fe5e60aea41844d3c6db7c4e9e21b185df25e1e725c7b7fb59a74071"
	Jan 03 20:00:05 addons-845596 kubelet[1346]: I0103 20:00:05.785060    1346 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c42b5554fe5e60aea41844d3c6db7c4e9e21b185df25e1e725c7b7fb59a74071"} err="failed to get container status \"c42b5554fe5e60aea41844d3c6db7c4e9e21b185df25e1e725c7b7fb59a74071\": rpc error: code = NotFound desc = could not find container \"c42b5554fe5e60aea41844d3c6db7c4e9e21b185df25e1e725c7b7fb59a74071\": container with ID starting with c42b5554fe5e60aea41844d3c6db7c4e9e21b185df25e1e725c7b7fb59a74071 not found: ID does not exist"
	Jan 03 20:00:06 addons-845596 kubelet[1346]: I0103 20:00:06.830575    1346 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="f4171327-ea32-47bb-8816-1e26ec07e30b" path="/var/lib/kubelet/pods/f4171327-ea32-47bb-8816-1e26ec07e30b/volumes"
	
	
	==> storage-provisioner [3b71c690ac7ce63794ac7d361e576a4882ca28dacc0c0864b562849c1c18556b] <==
	I0103 19:54:45.717464       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0103 19:54:45.756265       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0103 19:54:45.756504       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0103 19:54:45.819290       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0103 19:54:45.820563       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-845596_3296661e-c424-44fa-b7ab-b2af8ae0d70e!
	I0103 19:54:45.825002       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"54b88160-4d8e-4971-8a86-8d1055f18fa8", APIVersion:"v1", ResourceVersion:"924", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-845596_3296661e-c424-44fa-b7ab-b2af8ae0d70e became leader
	I0103 19:54:45.940347       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-845596_3296661e-c424-44fa-b7ab-b2af8ae0d70e!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-845596 -n addons-845596
helpers_test.go:261: (dbg) Run:  kubectl --context addons-845596 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (168.19s)
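Both probes in this failure time out rather than error: "ssh: Process exited with status 28" surfaces curl's own exit code 28 (operation timed out), and nslookup reached no server at 192.168.49.2. A minimal manual re-check, assuming the profile is still running and using only names that appear in the logs above:

	# cap the in-node curl at 10s instead of the test's 2m budget
	out/minikube-linux-arm64 -p addons-845596 ssh "curl -sv --max-time 10 http://127.0.0.1/ -H 'Host: nginx.example.com'"
	# confirm the scheduler's "forbidden" errors were only a startup race
	kubectl --context addons-845596 auth can-i list persistentvolumeclaims --as=system:kube-scheduler
	# inspect the CrashLoopBackOff pod reported by the kubelet
	kubectl --context addons-845596 -n default describe pod hello-world-app-5d77478584-dh66t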

TestIngressAddonLegacy/serial/ValidateIngressAddons (179.21s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:207: (dbg) Run:  kubectl --context ingress-addon-legacy-480050 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:207: (dbg) Done: kubectl --context ingress-addon-legacy-480050 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (10.315965443s)
addons_test.go:232: (dbg) Run:  kubectl --context ingress-addon-legacy-480050 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context ingress-addon-legacy-480050 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [c3baa6a0-271c-4510-bb07-5ad2ce7f7373] Pending
helpers_test.go:344: "nginx" [c3baa6a0-271c-4510-bb07-5ad2ce7f7373] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [c3baa6a0-271c-4510-bb07-5ad2ce7f7373] Running
addons_test.go:250: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 10.00359381s
addons_test.go:262: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-480050 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
E0103 20:09:11.514646  414763 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/functional-155561/client.crt: no such file or directory
E0103 20:09:11.520026  414763 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/functional-155561/client.crt: no such file or directory
E0103 20:09:11.530345  414763 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/functional-155561/client.crt: no such file or directory
E0103 20:09:11.550603  414763 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/functional-155561/client.crt: no such file or directory
E0103 20:09:11.590919  414763 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/functional-155561/client.crt: no such file or directory
E0103 20:09:11.671279  414763 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/functional-155561/client.crt: no such file or directory
E0103 20:09:11.831705  414763 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/functional-155561/client.crt: no such file or directory
E0103 20:09:12.152436  414763 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/functional-155561/client.crt: no such file or directory
E0103 20:09:12.793405  414763 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/functional-155561/client.crt: no such file or directory
E0103 20:09:14.073683  414763 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/functional-155561/client.crt: no such file or directory
E0103 20:09:16.634625  414763 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/functional-155561/client.crt: no such file or directory
E0103 20:09:21.754981  414763 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/functional-155561/client.crt: no such file or directory
E0103 20:09:31.995339  414763 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/functional-155561/client.crt: no such file or directory
addons_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ingress-addon-legacy-480050 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.159819715s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:278: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
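Since the curl above never received an answer, a quick in-cluster sanity check is worth running before the DNS step below; this sketch uses only the context and namespace names already present in the log:

	kubectl --context ingress-addon-legacy-480050 -n ingress-nginx get pods -o wide
	kubectl --context ingress-addon-legacy-480050 get ingress -A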
addons_test.go:286: (dbg) Run:  kubectl --context ingress-addon-legacy-480050 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-480050 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:297: (dbg) Non-zero exit: nslookup hello-john.test 192.168.49.2: exit status 1 (15.022040136s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
addons_test.go:299: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.49.2" : exit status 1
addons_test.go:303: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached

stderr: 
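The empty stderr combined with ";; connection timed out" means UDP port 53 on 192.168.49.2 never answered at all. dig (assumed to be installed on the host) gives a tighter probe and can rule UDP filtering in or out:

	dig @192.168.49.2 hello-john.test +tries=1 +time=2
	dig @192.168.49.2 hello-john.test +tcp +tries=1 +time=2   # retry over TCP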
addons_test.go:306: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-480050 addons disable ingress-dns --alsologtostderr -v=1
E0103 20:09:52.475547  414763 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/functional-155561/client.crt: no such file or directory
addons_test.go:306: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-480050 addons disable ingress-dns --alsologtostderr -v=1: (2.5379134s)
addons_test.go:311: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-480050 addons disable ingress --alsologtostderr -v=1
addons_test.go:311: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-480050 addons disable ingress --alsologtostderr -v=1: (7.564092754s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-480050
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-480050:

-- stdout --
	[
	    {
	        "Id": "53c1e67d05052697d5c846e1b0cc50e3b0e3a9e8869f5be56c4db43714f7cd3c",
	        "Created": "2024-01-03T20:05:44.059152146Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 442526,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-01-03T20:05:44.39279196Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:3bfff26a1ae256fcdf8f10a333efdefbe26edc5c1669e1cc5c973c016e44d3c4",
	        "ResolvConfPath": "/var/lib/docker/containers/53c1e67d05052697d5c846e1b0cc50e3b0e3a9e8869f5be56c4db43714f7cd3c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/53c1e67d05052697d5c846e1b0cc50e3b0e3a9e8869f5be56c4db43714f7cd3c/hostname",
	        "HostsPath": "/var/lib/docker/containers/53c1e67d05052697d5c846e1b0cc50e3b0e3a9e8869f5be56c4db43714f7cd3c/hosts",
	        "LogPath": "/var/lib/docker/containers/53c1e67d05052697d5c846e1b0cc50e3b0e3a9e8869f5be56c4db43714f7cd3c/53c1e67d05052697d5c846e1b0cc50e3b0e3a9e8869f5be56c4db43714f7cd3c-json.log",
	        "Name": "/ingress-addon-legacy-480050",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-480050:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-480050",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/477f33b0cfec796027ce8ebdb55a307a0ed57fa263a941631adeea4e0b23dbbb-init/diff:/var/lib/docker/overlay2/0cefd74c13c0ff527608d5d1778b7a3893c62167f91a1554bd1fa9cb8110135e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/477f33b0cfec796027ce8ebdb55a307a0ed57fa263a941631adeea4e0b23dbbb/merged",
	                "UpperDir": "/var/lib/docker/overlay2/477f33b0cfec796027ce8ebdb55a307a0ed57fa263a941631adeea4e0b23dbbb/diff",
	                "WorkDir": "/var/lib/docker/overlay2/477f33b0cfec796027ce8ebdb55a307a0ed57fa263a941631adeea4e0b23dbbb/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-480050",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-480050/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-480050",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-480050",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-480050",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "819fb2047d927745eda628b763be48f638ecc068984572169662eb36d8c42966",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33118"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33117"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33114"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33116"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33115"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/819fb2047d92",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-480050": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "53c1e67d0505",
	                        "ingress-addon-legacy-480050"
	                    ],
	                    "NetworkID": "06bba2b1847994feddab605d18e6007c4b7c359eb659517d6771d7bcbdf2839b",
	                    "EndpointID": "9116782d5de60b5a55622ebd8f412dba5172c17f34aca57b859ece94a10e9e91",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
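Most of the inspect payload above is boilerplate for this failure; the two fields that actually matter, the node IP and the host-side SSH port, can be pulled directly with Go templates, e.g.:

	docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' ingress-addon-legacy-480050
	docker inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' ingress-addon-legacy-480050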
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ingress-addon-legacy-480050 -n ingress-addon-legacy-480050
helpers_test.go:244: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-480050 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-480050 logs -n 25: (1.451685172s)
helpers_test.go:252: TestIngressAddonLegacy/serial/ValidateIngressAddons logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| Command |                                  Args                                  |           Profile           |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| image   | functional-155561 image load --daemon                                  | functional-155561           | jenkins | v1.32.0 | 03 Jan 24 20:05 UTC | 03 Jan 24 20:05 UTC |
	|         | gcr.io/google-containers/addon-resizer:functional-155561               |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image   | functional-155561 image ls                                             | functional-155561           | jenkins | v1.32.0 | 03 Jan 24 20:05 UTC | 03 Jan 24 20:05 UTC |
	| image   | functional-155561 image load --daemon                                  | functional-155561           | jenkins | v1.32.0 | 03 Jan 24 20:05 UTC | 03 Jan 24 20:05 UTC |
	|         | gcr.io/google-containers/addon-resizer:functional-155561               |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image   | functional-155561 image ls                                             | functional-155561           | jenkins | v1.32.0 | 03 Jan 24 20:05 UTC | 03 Jan 24 20:05 UTC |
	| image   | functional-155561 image save                                           | functional-155561           | jenkins | v1.32.0 | 03 Jan 24 20:05 UTC | 03 Jan 24 20:05 UTC |
	|         | gcr.io/google-containers/addon-resizer:functional-155561               |                             |         |         |                     |                     |
	|         | /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image   | functional-155561 image rm                                             | functional-155561           | jenkins | v1.32.0 | 03 Jan 24 20:05 UTC | 03 Jan 24 20:05 UTC |
	|         | gcr.io/google-containers/addon-resizer:functional-155561               |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image   | functional-155561 image ls                                             | functional-155561           | jenkins | v1.32.0 | 03 Jan 24 20:05 UTC | 03 Jan 24 20:05 UTC |
	| image   | functional-155561 image load                                           | functional-155561           | jenkins | v1.32.0 | 03 Jan 24 20:05 UTC | 03 Jan 24 20:05 UTC |
	|         | /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image   | functional-155561 image ls                                             | functional-155561           | jenkins | v1.32.0 | 03 Jan 24 20:05 UTC | 03 Jan 24 20:05 UTC |
	| image   | functional-155561 image save --daemon                                  | functional-155561           | jenkins | v1.32.0 | 03 Jan 24 20:05 UTC | 03 Jan 24 20:05 UTC |
	|         | gcr.io/google-containers/addon-resizer:functional-155561               |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image   | functional-155561                                                      | functional-155561           | jenkins | v1.32.0 | 03 Jan 24 20:05 UTC | 03 Jan 24 20:05 UTC |
	|         | image ls --format yaml                                                 |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image   | functional-155561                                                      | functional-155561           | jenkins | v1.32.0 | 03 Jan 24 20:05 UTC | 03 Jan 24 20:05 UTC |
	|         | image ls --format short                                                |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| ssh     | functional-155561 ssh pgrep                                            | functional-155561           | jenkins | v1.32.0 | 03 Jan 24 20:05 UTC |                     |
	|         | buildkitd                                                              |                             |         |         |                     |                     |
	| image   | functional-155561                                                      | functional-155561           | jenkins | v1.32.0 | 03 Jan 24 20:05 UTC | 03 Jan 24 20:05 UTC |
	|         | image ls --format json                                                 |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image   | functional-155561                                                      | functional-155561           | jenkins | v1.32.0 | 03 Jan 24 20:05 UTC | 03 Jan 24 20:05 UTC |
	|         | image ls --format table                                                |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image   | functional-155561 image build -t                                       | functional-155561           | jenkins | v1.32.0 | 03 Jan 24 20:05 UTC | 03 Jan 24 20:05 UTC |
	|         | localhost/my-image:functional-155561                                   |                             |         |         |                     |                     |
	|         | testdata/build --alsologtostderr                                       |                             |         |         |                     |                     |
	| image   | functional-155561 image ls                                             | functional-155561           | jenkins | v1.32.0 | 03 Jan 24 20:05 UTC | 03 Jan 24 20:05 UTC |
	| delete  | -p functional-155561                                                   | functional-155561           | jenkins | v1.32.0 | 03 Jan 24 20:05 UTC | 03 Jan 24 20:05 UTC |
	| start   | -p ingress-addon-legacy-480050                                         | ingress-addon-legacy-480050 | jenkins | v1.32.0 | 03 Jan 24 20:05 UTC | 03 Jan 24 20:06 UTC |
	|         | --kubernetes-version=v1.18.20                                          |                             |         |         |                     |                     |
	|         | --memory=4096 --wait=true                                              |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                      |                             |         |         |                     |                     |
	|         | -v=5 --driver=docker                                                   |                             |         |         |                     |                     |
	|         | --container-runtime=crio                                               |                             |         |         |                     |                     |
	| addons  | ingress-addon-legacy-480050                                            | ingress-addon-legacy-480050 | jenkins | v1.32.0 | 03 Jan 24 20:06 UTC | 03 Jan 24 20:07 UTC |
	|         | addons enable ingress                                                  |                             |         |         |                     |                     |
	|         | --alsologtostderr -v=5                                                 |                             |         |         |                     |                     |
	| addons  | ingress-addon-legacy-480050                                            | ingress-addon-legacy-480050 | jenkins | v1.32.0 | 03 Jan 24 20:07 UTC | 03 Jan 24 20:07 UTC |
	|         | addons enable ingress-dns                                              |                             |         |         |                     |                     |
	|         | --alsologtostderr -v=5                                                 |                             |         |         |                     |                     |
	| ssh     | ingress-addon-legacy-480050                                            | ingress-addon-legacy-480050 | jenkins | v1.32.0 | 03 Jan 24 20:07 UTC |                     |
	|         | ssh curl -s http://127.0.0.1/                                          |                             |         |         |                     |                     |
	|         | -H 'Host: nginx.example.com'                                           |                             |         |         |                     |                     |
	| ip      | ingress-addon-legacy-480050 ip                                         | ingress-addon-legacy-480050 | jenkins | v1.32.0 | 03 Jan 24 20:09 UTC | 03 Jan 24 20:09 UTC |
	| addons  | ingress-addon-legacy-480050                                            | ingress-addon-legacy-480050 | jenkins | v1.32.0 | 03 Jan 24 20:09 UTC | 03 Jan 24 20:09 UTC |
	|         | addons disable ingress-dns                                             |                             |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                 |                             |         |         |                     |                     |
	| addons  | ingress-addon-legacy-480050                                            | ingress-addon-legacy-480050 | jenkins | v1.32.0 | 03 Jan 24 20:09 UTC | 03 Jan 24 20:10 UTC |
	|         | addons disable ingress                                                 |                             |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                 |                             |         |         |                     |                     |
	|---------|------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/03 20:05:22
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.21.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0103 20:05:22.170118  442056 out.go:296] Setting OutFile to fd 1 ...
	I0103 20:05:22.170355  442056 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0103 20:05:22.170367  442056 out.go:309] Setting ErrFile to fd 2...
	I0103 20:05:22.170374  442056 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0103 20:05:22.170699  442056 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17885-409390/.minikube/bin
	I0103 20:05:22.171161  442056 out.go:303] Setting JSON to false
	I0103 20:05:22.172090  442056 start.go:128] hostinfo: {"hostname":"ip-172-31-29-130","uptime":6472,"bootTime":1704305851,"procs":164,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0103 20:05:22.172181  442056 start.go:138] virtualization:  
	I0103 20:05:22.175174  442056 out.go:177] * [ingress-addon-legacy-480050] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0103 20:05:22.177921  442056 out.go:177]   - MINIKUBE_LOCATION=17885
	I0103 20:05:22.180052  442056 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0103 20:05:22.178090  442056 notify.go:220] Checking for updates...
	I0103 20:05:22.184236  442056 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17885-409390/kubeconfig
	I0103 20:05:22.186139  442056 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17885-409390/.minikube
	I0103 20:05:22.188145  442056 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0103 20:05:22.190067  442056 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0103 20:05:22.192273  442056 driver.go:392] Setting default libvirt URI to qemu:///system
	I0103 20:05:22.216439  442056 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I0103 20:05:22.216544  442056 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0103 20:05:22.306777  442056 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:36 SystemTime:2024-01-03 20:05:22.296796831 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215044096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0103 20:05:22.306878  442056 docker.go:295] overlay module found
	I0103 20:05:22.310284  442056 out.go:177] * Using the docker driver based on user configuration
	I0103 20:05:22.312643  442056 start.go:298] selected driver: docker
	I0103 20:05:22.312661  442056 start.go:902] validating driver "docker" against <nil>
	I0103 20:05:22.312674  442056 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0103 20:05:22.313282  442056 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0103 20:05:22.380630  442056 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:36 SystemTime:2024-01-03 20:05:22.371214474 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215044096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0103 20:05:22.380784  442056 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0103 20:05:22.381037  442056 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0103 20:05:22.383370  442056 out.go:177] * Using Docker driver with root privileges
	I0103 20:05:22.385441  442056 cni.go:84] Creating CNI manager for ""
	I0103 20:05:22.385466  442056 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0103 20:05:22.385481  442056 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I0103 20:05:22.385495  442056 start_flags.go:323] config:
	{Name:ingress-addon-legacy-480050 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-480050 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0103 20:05:22.388886  442056 out.go:177] * Starting control plane node ingress-addon-legacy-480050 in cluster ingress-addon-legacy-480050
	I0103 20:05:22.391017  442056 cache.go:121] Beginning downloading kic base image for docker with crio
	I0103 20:05:22.392920  442056 out.go:177] * Pulling base image v0.0.42-1703498848-17857 ...
	I0103 20:05:22.394963  442056 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0103 20:05:22.394992  442056 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c in local docker daemon
	I0103 20:05:22.412251  442056 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c in local docker daemon, skipping pull
	I0103 20:05:22.412285  442056 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c exists in daemon, skipping load
	I0103 20:05:22.462454  442056 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4
	I0103 20:05:22.462480  442056 cache.go:56] Caching tarball of preloaded images
	I0103 20:05:22.462671  442056 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0103 20:05:22.465254  442056 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I0103 20:05:22.467810  442056 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 ...
	I0103 20:05:22.583877  442056 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4?checksum=md5:8ddd7f37d9a9977fe856222993d36c3d -> /home/jenkins/minikube-integration/17885-409390/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4
	I0103 20:05:36.117245  442056 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 ...
	I0103 20:05:36.117358  442056 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17885-409390/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 ...
	I0103 20:05:37.312946  442056 cache.go:59] Finished verifying existence of preloaded tar for  v1.18.20 on crio
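	The preload fetch above embeds an md5 checksum in the download URL and verifies it after saving; the same check can be reproduced by hand with the URL and checksum copied from the log lines above:
	
	curl -fLo preload.tar.lz4 'https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4'
	echo '8ddd7f37d9a9977fe856222993d36c3d  preload.tar.lz4' | md5sum -c -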
	I0103 20:05:37.313325  442056 profile.go:148] Saving config to /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/ingress-addon-legacy-480050/config.json ...
	I0103 20:05:37.313359  442056 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/ingress-addon-legacy-480050/config.json: {Name:mke06a4ffd64d39aca17e9ed3c4749aaa40bf767 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 20:05:37.313545  442056 cache.go:194] Successfully downloaded all kic artifacts
	I0103 20:05:37.313593  442056 start.go:365] acquiring machines lock for ingress-addon-legacy-480050: {Name:mk057b79fcf1df21f1cf14b95b2427e7329e59de Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0103 20:05:37.313651  442056 start.go:369] acquired machines lock for "ingress-addon-legacy-480050" in 44.126µs
	I0103 20:05:37.313673  442056 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-480050 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-480050 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0103 20:05:37.313746  442056 start.go:125] createHost starting for "" (driver="docker")
	I0103 20:05:37.316083  442056 out.go:204] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I0103 20:05:37.316313  442056 start.go:159] libmachine.API.Create for "ingress-addon-legacy-480050" (driver="docker")
	I0103 20:05:37.316335  442056 client.go:168] LocalClient.Create starting
	I0103 20:05:37.316392  442056 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17885-409390/.minikube/certs/ca.pem
	I0103 20:05:37.316430  442056 main.go:141] libmachine: Decoding PEM data...
	I0103 20:05:37.316450  442056 main.go:141] libmachine: Parsing certificate...
	I0103 20:05:37.316508  442056 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17885-409390/.minikube/certs/cert.pem
	I0103 20:05:37.316531  442056 main.go:141] libmachine: Decoding PEM data...
	I0103 20:05:37.316544  442056 main.go:141] libmachine: Parsing certificate...
	I0103 20:05:37.316898  442056 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-480050 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0103 20:05:37.333773  442056 cli_runner.go:211] docker network inspect ingress-addon-legacy-480050 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0103 20:05:37.333870  442056 network_create.go:281] running [docker network inspect ingress-addon-legacy-480050] to gather additional debugging logs...
	I0103 20:05:37.333892  442056 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-480050
	W0103 20:05:37.350331  442056 cli_runner.go:211] docker network inspect ingress-addon-legacy-480050 returned with exit code 1
	I0103 20:05:37.350363  442056 network_create.go:284] error running [docker network inspect ingress-addon-legacy-480050]: docker network inspect ingress-addon-legacy-480050: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ingress-addon-legacy-480050 not found
	I0103 20:05:37.350378  442056 network_create.go:286] output of [docker network inspect ingress-addon-legacy-480050]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ingress-addon-legacy-480050 not found
	
	** /stderr **
	I0103 20:05:37.350476  442056 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0103 20:05:37.367789  442056 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40020f6ba0}
	I0103 20:05:37.367830  442056 network_create.go:124] attempt to create docker network ingress-addon-legacy-480050 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0103 20:05:37.367887  442056 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ingress-addon-legacy-480050 ingress-addon-legacy-480050
	I0103 20:05:37.439374  442056 network_create.go:108] docker network ingress-addon-legacy-480050 192.168.49.0/24 created
	I0103 20:05:37.439429  442056 kic.go:121] calculated static IP "192.168.49.2" for the "ingress-addon-legacy-480050" container
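For reference, the subnet and gateway minikube just chose can be read back from the created network; a minimal check (hypothetical, run on the same docker host):
	docker network inspect ingress-addon-legacy-480050 \
	  --format 'subnet={{(index .IPAM.Config 0).Subnet}} gateway={{(index .IPAM.Config 0).Gateway}}'
	# expected: subnet=192.168.49.0/24 gateway=192.168.49.1; the "static" IP 192.168.49.2
	# is ClientMin from the reservation above, i.e. the first address after the gateway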
	I0103 20:05:37.439510  442056 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0103 20:05:37.458679  442056 cli_runner.go:164] Run: docker volume create ingress-addon-legacy-480050 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-480050 --label created_by.minikube.sigs.k8s.io=true
	I0103 20:05:37.477900  442056 oci.go:103] Successfully created a docker volume ingress-addon-legacy-480050
	I0103 20:05:37.478006  442056 cli_runner.go:164] Run: docker run --rm --name ingress-addon-legacy-480050-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-480050 --entrypoint /usr/bin/test -v ingress-addon-legacy-480050:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c -d /var/lib
	I0103 20:05:38.992202  442056 cli_runner.go:217] Completed: docker run --rm --name ingress-addon-legacy-480050-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-480050 --entrypoint /usr/bin/test -v ingress-addon-legacy-480050:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c -d /var/lib: (1.514145424s)
	I0103 20:05:38.992231  442056 oci.go:107] Successfully prepared a docker volume ingress-addon-legacy-480050
	I0103 20:05:38.992250  442056 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0103 20:05:38.992269  442056 kic.go:194] Starting extracting preloaded images to volume ...
	I0103 20:05:38.992368  442056 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17885-409390/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-480050:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c -I lz4 -xf /preloaded.tar -C /extractDir
	I0103 20:05:43.970205  442056 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17885-409390/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-480050:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c -I lz4 -xf /preloaded.tar -C /extractDir: (4.977793982s)
	I0103 20:05:43.970238  442056 kic.go:203] duration metric: took 4.977966 seconds to extract preloaded images to volume
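The preload is unpacked by a throwaway tar container straight into the named volume that will later back the node's /var. A hypothetical peek at the volume contents (alpine here is just an arbitrary small image):
	docker run --rm -v ingress-addon-legacy-480050:/var alpine ls /var
	# should now show the lib/ tree unpacked from preloaded.tar.lz4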
	W0103 20:05:43.970387  442056 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0103 20:05:43.970501  442056 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0103 20:05:44.042861  442056 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ingress-addon-legacy-480050 --name ingress-addon-legacy-480050 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-480050 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ingress-addon-legacy-480050 --network ingress-addon-legacy-480050 --ip 192.168.49.2 --volume ingress-addon-legacy-480050:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c
	I0103 20:05:44.401333  442056 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-480050 --format={{.State.Running}}
	I0103 20:05:44.422556  442056 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-480050 --format={{.State.Status}}
	I0103 20:05:44.446804  442056 cli_runner.go:164] Run: docker exec ingress-addon-legacy-480050 stat /var/lib/dpkg/alternatives/iptables
	I0103 20:05:44.532118  442056 oci.go:144] the created container "ingress-addon-legacy-480050" has a running status.
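The long docker run above publishes 22/tcp to a random loopback port (--publish=127.0.0.1::22); a hypothetical way to read the mapping back, equivalent to the container-inspect template minikube uses below, is:
	docker port ingress-addon-legacy-480050 22/tcp
	# prints e.g. 127.0.0.1:33118, the port every SSH client below connects to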
	I0103 20:05:44.532147  442056 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17885-409390/.minikube/machines/ingress-addon-legacy-480050/id_rsa...
	I0103 20:05:45.194347  442056 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-409390/.minikube/machines/ingress-addon-legacy-480050/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0103 20:05:45.194455  442056 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17885-409390/.minikube/machines/ingress-addon-legacy-480050/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0103 20:05:45.230464  442056 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-480050 --format={{.State.Status}}
	I0103 20:05:45.261760  442056 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0103 20:05:45.261785  442056 kic_runner.go:114] Args: [docker exec --privileged ingress-addon-legacy-480050 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0103 20:05:45.380700  442056 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-480050 --format={{.State.Status}}
	I0103 20:05:45.405591  442056 machine.go:88] provisioning docker machine ...
	I0103 20:05:45.405625  442056 ubuntu.go:169] provisioning hostname "ingress-addon-legacy-480050"
	I0103 20:05:45.405700  442056 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-480050
	I0103 20:05:45.436309  442056 main.go:141] libmachine: Using SSH client type: native
	I0103 20:05:45.436756  442056 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3beb50] 0x3c12c0 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I0103 20:05:45.436771  442056 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-480050 && echo "ingress-addon-legacy-480050" | sudo tee /etc/hostname
	I0103 20:05:45.609291  442056 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-480050
	
	I0103 20:05:45.609385  442056 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-480050
	I0103 20:05:45.637056  442056 main.go:141] libmachine: Using SSH client type: native
	I0103 20:05:45.637456  442056 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3beb50] 0x3c12c0 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I0103 20:05:45.637475  442056 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-480050' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-480050/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-480050' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0103 20:05:45.784843  442056 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0103 20:05:45.784919  442056 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17885-409390/.minikube CaCertPath:/home/jenkins/minikube-integration/17885-409390/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17885-409390/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17885-409390/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17885-409390/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17885-409390/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17885-409390/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17885-409390/.minikube}
	I0103 20:05:45.784999  442056 ubuntu.go:177] setting up certificates
	I0103 20:05:45.785028  442056 provision.go:83] configureAuth start
	I0103 20:05:45.785124  442056 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-480050
	I0103 20:05:45.804242  442056 provision.go:138] copyHostCerts
	I0103 20:05:45.804281  442056 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-409390/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17885-409390/.minikube/ca.pem
	I0103 20:05:45.804313  442056 exec_runner.go:144] found /home/jenkins/minikube-integration/17885-409390/.minikube/ca.pem, removing ...
	I0103 20:05:45.804323  442056 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17885-409390/.minikube/ca.pem
	I0103 20:05:45.804397  442056 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17885-409390/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17885-409390/.minikube/ca.pem (1078 bytes)
	I0103 20:05:45.804472  442056 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-409390/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17885-409390/.minikube/cert.pem
	I0103 20:05:45.804488  442056 exec_runner.go:144] found /home/jenkins/minikube-integration/17885-409390/.minikube/cert.pem, removing ...
	I0103 20:05:45.804492  442056 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17885-409390/.minikube/cert.pem
	I0103 20:05:45.804522  442056 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17885-409390/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17885-409390/.minikube/cert.pem (1123 bytes)
	I0103 20:05:45.804559  442056 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-409390/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17885-409390/.minikube/key.pem
	I0103 20:05:45.804574  442056 exec_runner.go:144] found /home/jenkins/minikube-integration/17885-409390/.minikube/key.pem, removing ...
	I0103 20:05:45.804579  442056 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17885-409390/.minikube/key.pem
	I0103 20:05:45.804613  442056 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17885-409390/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17885-409390/.minikube/key.pem (1679 bytes)
	I0103 20:05:45.804654  442056 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17885-409390/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17885-409390/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17885-409390/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-480050 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube ingress-addon-legacy-480050]
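The SAN list baked into the server certificate can be confirmed after generation; a minimal sketch using the path from the log:
	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/17885-409390/.minikube/machines/server.pem \
	  | grep -A1 'Subject Alternative Name'
	# should list minikube, ingress-addon-legacy-480050, localhost, 192.168.49.2 and 127.0.0.1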
	I0103 20:05:46.402571  442056 provision.go:172] copyRemoteCerts
	I0103 20:05:46.402639  442056 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0103 20:05:46.402697  442056 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-480050
	I0103 20:05:46.423552  442056 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/17885-409390/.minikube/machines/ingress-addon-legacy-480050/id_rsa Username:docker}
	I0103 20:05:46.525026  442056 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-409390/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0103 20:05:46.525087  442056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-409390/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0103 20:05:46.553823  442056 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-409390/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0103 20:05:46.553897  442056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-409390/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0103 20:05:46.582228  442056 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-409390/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0103 20:05:46.582338  442056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-409390/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0103 20:05:46.610587  442056 provision.go:86] duration metric: configureAuth took 825.512253ms
	I0103 20:05:46.610614  442056 ubuntu.go:193] setting minikube options for container-runtime
	I0103 20:05:46.610794  442056 config.go:182] Loaded profile config "ingress-addon-legacy-480050": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I0103 20:05:46.610895  442056 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-480050
	I0103 20:05:46.628681  442056 main.go:141] libmachine: Using SSH client type: native
	I0103 20:05:46.629105  442056 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3beb50] 0x3c12c0 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I0103 20:05:46.629128  442056 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0103 20:05:46.903309  442056 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0103 20:05:46.903334  442056 machine.go:91] provisioned docker machine in 1.497719225s
	I0103 20:05:46.903345  442056 client.go:171] LocalClient.Create took 9.587003658s
	I0103 20:05:46.903363  442056 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-480050" took 9.587050131s
	I0103 20:05:46.903372  442056 start.go:300] post-start starting for "ingress-addon-legacy-480050" (driver="docker")
	I0103 20:05:46.903383  442056 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0103 20:05:46.903467  442056 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0103 20:05:46.903514  442056 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-480050
	I0103 20:05:46.921487  442056 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/17885-409390/.minikube/machines/ingress-addon-legacy-480050/id_rsa Username:docker}
	I0103 20:05:47.021931  442056 ssh_runner.go:195] Run: cat /etc/os-release
	I0103 20:05:47.026172  442056 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0103 20:05:47.026208  442056 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0103 20:05:47.026240  442056 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0103 20:05:47.026252  442056 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0103 20:05:47.026263  442056 filesync.go:126] Scanning /home/jenkins/minikube-integration/17885-409390/.minikube/addons for local assets ...
	I0103 20:05:47.026327  442056 filesync.go:126] Scanning /home/jenkins/minikube-integration/17885-409390/.minikube/files for local assets ...
	I0103 20:05:47.026423  442056 filesync.go:149] local asset: /home/jenkins/minikube-integration/17885-409390/.minikube/files/etc/ssl/certs/4147632.pem -> 4147632.pem in /etc/ssl/certs
	I0103 20:05:47.026435  442056 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-409390/.minikube/files/etc/ssl/certs/4147632.pem -> /etc/ssl/certs/4147632.pem
	I0103 20:05:47.026602  442056 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0103 20:05:47.037148  442056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-409390/.minikube/files/etc/ssl/certs/4147632.pem --> /etc/ssl/certs/4147632.pem (1708 bytes)
	I0103 20:05:47.065690  442056 start.go:303] post-start completed in 162.302349ms
	I0103 20:05:47.066079  442056 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-480050
	I0103 20:05:47.087227  442056 profile.go:148] Saving config to /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/ingress-addon-legacy-480050/config.json ...
	I0103 20:05:47.087587  442056 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0103 20:05:47.087643  442056 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-480050
	I0103 20:05:47.107072  442056 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/17885-409390/.minikube/machines/ingress-addon-legacy-480050/id_rsa Username:docker}
	I0103 20:05:47.204836  442056 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0103 20:05:47.211407  442056 start.go:128] duration metric: createHost completed in 9.897644974s
	I0103 20:05:47.211435  442056 start.go:83] releasing machines lock for "ingress-addon-legacy-480050", held for 9.897772487s
	I0103 20:05:47.211526  442056 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-480050
	I0103 20:05:47.234268  442056 ssh_runner.go:195] Run: cat /version.json
	I0103 20:05:47.234345  442056 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-480050
	I0103 20:05:47.234624  442056 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0103 20:05:47.234708  442056 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-480050
	I0103 20:05:47.265779  442056 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/17885-409390/.minikube/machines/ingress-addon-legacy-480050/id_rsa Username:docker}
	I0103 20:05:47.266689  442056 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/17885-409390/.minikube/machines/ingress-addon-legacy-480050/id_rsa Username:docker}
	I0103 20:05:47.490099  442056 ssh_runner.go:195] Run: systemctl --version
	I0103 20:05:47.495501  442056 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0103 20:05:47.642666  442056 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0103 20:05:47.648144  442056 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0103 20:05:47.671152  442056 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0103 20:05:47.671300  442056 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0103 20:05:47.708570  442056 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0103 20:05:47.708593  442056 start.go:475] detecting cgroup driver to use...
	I0103 20:05:47.708625  442056 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0103 20:05:47.708678  442056 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0103 20:05:47.726760  442056 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0103 20:05:47.740027  442056 docker.go:203] disabling cri-docker service (if available) ...
	I0103 20:05:47.740148  442056 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0103 20:05:47.755898  442056 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0103 20:05:47.772737  442056 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0103 20:05:47.877613  442056 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0103 20:05:47.980555  442056 docker.go:219] disabling docker service ...
	I0103 20:05:47.980622  442056 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0103 20:05:48.003313  442056 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0103 20:05:48.022594  442056 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0103 20:05:48.119451  442056 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0103 20:05:48.223912  442056 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0103 20:05:48.238499  442056 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0103 20:05:48.262248  442056 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0103 20:05:48.262333  442056 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0103 20:05:48.276337  442056 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0103 20:05:48.276434  442056 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0103 20:05:48.288625  442056 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0103 20:05:48.303224  442056 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
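Taken together, the sed edits above pin the pause image, switch CRI-O to the cgroupfs manager, and move conmon into the pod cgroup. A hypothetical check of the resulting keys on the node:
	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
	# expected:
	#   pause_image = "registry.k8s.io/pause:3.2"
	#   cgroup_manager = "cgroupfs"
	#   conmon_cgroup = "pod"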
	I0103 20:05:48.316406  442056 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0103 20:05:48.327666  442056 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0103 20:05:48.338153  442056 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0103 20:05:48.348744  442056 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0103 20:05:48.445743  442056 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0103 20:05:48.581214  442056 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0103 20:05:48.581302  442056 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0103 20:05:48.586162  442056 start.go:543] Will wait 60s for crictl version
	I0103 20:05:48.586252  442056 ssh_runner.go:195] Run: which crictl
	I0103 20:05:48.590957  442056 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0103 20:05:48.636025  442056 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0103 20:05:48.636126  442056 ssh_runner.go:195] Run: crio --version
	I0103 20:05:48.680329  442056 ssh_runner.go:195] Run: crio --version
	I0103 20:05:48.732373  442056 out.go:177] * Preparing Kubernetes v1.18.20 on CRI-O 1.24.6 ...
	I0103 20:05:48.733923  442056 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-480050 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0103 20:05:48.751616  442056 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0103 20:05:48.756248  442056 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
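The /etc/hosts rewrite above is copy-on-write: the old entry is filtered out with grep -v, the new line is appended, and the temporary file is copied back with sudo. A hypothetical spot-check over the same SSH endpoint:
	ssh -p 33118 -i /home/jenkins/minikube-integration/17885-409390/.minikube/machines/ingress-addon-legacy-480050/id_rsa \
	  docker@127.0.0.1 'grep host.minikube.internal /etc/hosts'
	# expected: 192.168.49.1	host.minikube.internal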
	I0103 20:05:48.770302  442056 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0103 20:05:48.770371  442056 ssh_runner.go:195] Run: sudo crictl images --output json
	I0103 20:05:48.824445  442056 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I0103 20:05:48.824537  442056 ssh_runner.go:195] Run: which lz4
	I0103 20:05:48.829033  442056 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-409390/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 -> /preloaded.tar.lz4
	I0103 20:05:48.829134  442056 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0103 20:05:48.833935  442056 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0103 20:05:48.834002  442056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-409390/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 --> /preloaded.tar.lz4 (489766197 bytes)
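The stat probe above is how the runner avoids re-copying: it only falls back to scp when the remote size/mtime cannot be read. A hypothetical re-check of the copied tarball over the same SSH endpoint:
	ssh -p 33118 -i /home/jenkins/minikube-integration/17885-409390/.minikube/machines/ingress-addon-legacy-480050/id_rsa \
	  docker@127.0.0.1 'stat -c "%s %y" /preloaded.tar.lz4'
	# size should now be 489766197 bytes, matching the transfer above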
	I0103 20:05:50.898141  442056 crio.go:444] Took 2.069044 seconds to copy over tarball
	I0103 20:05:50.898250  442056 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0103 20:05:53.573403  442056 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.675120389s)
	I0103 20:05:53.573435  442056 crio.go:451] Took 2.675268 seconds to extract the tarball
	I0103 20:05:53.573446  442056 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0103 20:05:53.675067  442056 ssh_runner.go:195] Run: sudo crictl images --output json
	I0103 20:05:53.717110  442056 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I0103 20:05:53.717137  442056 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0103 20:05:53.717195  442056 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0103 20:05:53.717389  442056 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
	I0103 20:05:53.717468  442056 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0103 20:05:53.717535  442056 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
	I0103 20:05:53.717625  442056 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
	I0103 20:05:53.717705  442056 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0103 20:05:53.717781  442056 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I0103 20:05:53.717851  442056 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I0103 20:05:53.718883  442056 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I0103 20:05:53.719271  442056 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I0103 20:05:53.719506  442056 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0103 20:05:53.720282  442056 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
	I0103 20:05:53.720342  442056 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0103 20:05:53.719580  442056 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0103 20:05:53.720579  442056 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
	I0103 20:05:53.719731  442056 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
	W0103 20:05:54.141199  442056 image.go:265] image registry.k8s.io/kube-controller-manager:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0103 20:05:54.141384  442056 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.18.20
	W0103 20:05:54.161769  442056 image.go:265] image registry.k8s.io/etcd:3.4.3-0 arch mismatch: want arm64 got amd64. fixing
	I0103 20:05:54.162011  442056 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.3-0
	W0103 20:05:54.167818  442056 image.go:265] image registry.k8s.io/kube-scheduler:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0103 20:05:54.167987  442056 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.18.20
	I0103 20:05:54.192190  442056 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	W0103 20:05:54.214926  442056 image.go:265] image registry.k8s.io/kube-proxy:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0103 20:05:54.215246  442056 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.18.20
	I0103 20:05:54.215438  442056 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "297c79afbdb81ceb4cf857e0c54a0de7b6ce7ebe01e6cab68fc8baf342be3ea7" in container runtime
	I0103 20:05:54.215490  442056 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0103 20:05:54.215547  442056 ssh_runner.go:195] Run: which crictl
	W0103 20:05:54.221368  442056 image.go:265] image registry.k8s.io/coredns:1.6.7 arch mismatch: want arm64 got amd64. fixing
	I0103 20:05:54.221599  442056 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.7
	I0103 20:05:54.278833  442056 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "29dd247b2572efbe28fcaea3fef1c5d72593da59f7350e3f6d2e6618983f9c03" in container runtime
	I0103 20:05:54.279001  442056 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.3-0
	I0103 20:05:54.278950  442056 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "177548d745cb87f773d02f41d453af2f2a1479dbe3c32e749cf6d8145c005e79" in container runtime
	I0103 20:05:54.279078  442056 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
	I0103 20:05:54.279110  442056 ssh_runner.go:195] Run: which crictl
	I0103 20:05:54.279151  442056 ssh_runner.go:195] Run: which crictl
	W0103 20:05:54.291193  442056 image.go:265] image registry.k8s.io/kube-apiserver:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0103 20:05:54.291444  442056 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.18.20
	I0103 20:05:54.363134  442056 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "2a060e2e7101d419352bf82c613158587400be743482d9a537ec4a9d1b4eb93c" in container runtime
	I0103 20:05:54.363437  442056 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0103 20:05:54.363493  442056 ssh_runner.go:195] Run: which crictl
	I0103 20:05:54.363234  442056 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "b11cdc97ac6ac4ef2b3b0662edbe16597084b17cbc8e3d61fcaf4ef827a7ed18" in container runtime
	I0103 20:05:54.363617  442056 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.18.20
	I0103 20:05:54.363665  442056 ssh_runner.go:195] Run: which crictl
	I0103 20:05:54.363339  442056 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.18.20
	I0103 20:05:54.363379  442056 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.18.20
	I0103 20:05:54.363380  442056 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "ff3af22d8878afc6985d3fec3e066d00ef431aa166c3a01ac58f1990adc92a2c" in container runtime
	I0103 20:05:54.363885  442056 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.7
	I0103 20:05:54.363405  442056 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.3-0
	I0103 20:05:54.363926  442056 ssh_runner.go:195] Run: which crictl
	I0103 20:05:54.410415  442056 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "d353007847ec85700463981309a5846c8d9c93fbcd1323104266212926d68257" in container runtime
	I0103 20:05:54.410454  442056 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
	I0103 20:05:54.410500  442056 ssh_runner.go:195] Run: which crictl
	W0103 20:05:54.426343  442056 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0103 20:05:54.426613  442056 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0103 20:05:54.489275  442056 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.7
	I0103 20:05:54.489361  442056 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17885-409390/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.3-0
	I0103 20:05:54.489466  442056 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.18.20
	I0103 20:05:54.489540  442056 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17885-409390/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.18.20
	I0103 20:05:54.489573  442056 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17885-409390/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.18.20
	I0103 20:05:54.489611  442056 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0103 20:05:54.489640  442056 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.18.20
	I0103 20:05:54.644965  442056 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0103 20:05:54.645014  442056 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0103 20:05:54.645065  442056 ssh_runner.go:195] Run: which crictl
	I0103 20:05:54.645153  442056 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17885-409390/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.6.7
	I0103 20:05:54.645209  442056 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17885-409390/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.18.20
	I0103 20:05:54.645253  442056 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17885-409390/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.18.20
	I0103 20:05:54.645301  442056 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17885-409390/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2
	I0103 20:05:54.649592  442056 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0103 20:05:54.713201  442056 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17885-409390/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0103 20:05:54.713326  442056 cache_images.go:92] LoadImages completed in 996.174822ms
	W0103 20:05:54.713411  442056 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17885-409390/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.3-0: no such file or directory
	I0103 20:05:54.713495  442056 ssh_runner.go:195] Run: crio config
	I0103 20:05:54.786844  442056 cni.go:84] Creating CNI manager for ""
	I0103 20:05:54.786865  442056 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0103 20:05:54.786903  442056 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0103 20:05:54.786924  442056 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-480050 NodeName:ingress-addon-legacy-480050 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0103 20:05:54.787076  442056 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "ingress-addon-legacy-480050"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0103 20:05:54.787151  442056 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=ingress-addon-legacy-480050 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-480050 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0103 20:05:54.787221  442056 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I0103 20:05:54.797725  442056 binaries.go:44] Found k8s binaries, skipping transfer
	I0103 20:05:54.797796  442056 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0103 20:05:54.808014  442056 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (486 bytes)
	I0103 20:05:54.829148  442056 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I0103 20:05:54.850693  442056 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0103 20:05:54.871674  442056 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0103 20:05:54.876019  442056 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0103 20:05:54.889021  442056 certs.go:56] Setting up /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/ingress-addon-legacy-480050 for IP: 192.168.49.2
	I0103 20:05:54.889053  442056 certs.go:190] acquiring lock for shared ca certs: {Name:mk7a87d13d39d2defe5d349d371b78fa1f1e95bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 20:05:54.889221  442056 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17885-409390/.minikube/ca.key
	I0103 20:05:54.889270  442056 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17885-409390/.minikube/proxy-client-ca.key
	I0103 20:05:54.889329  442056 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/ingress-addon-legacy-480050/client.key
	I0103 20:05:54.889344  442056 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/ingress-addon-legacy-480050/client.crt with IP's: []
	I0103 20:05:55.049445  442056 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/ingress-addon-legacy-480050/client.crt ...
	I0103 20:05:55.049483  442056 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/ingress-addon-legacy-480050/client.crt: {Name:mk9161a84e8114685bdab85314d045ac0bdd908b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 20:05:55.049709  442056 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/ingress-addon-legacy-480050/client.key ...
	I0103 20:05:55.049725  442056 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/ingress-addon-legacy-480050/client.key: {Name:mk9c147bfcd0a0acb42031867bca7e2329381179 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 20:05:55.049814  442056 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/ingress-addon-legacy-480050/apiserver.key.dd3b5fb2
	I0103 20:05:55.049827  442056 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/ingress-addon-legacy-480050/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0103 20:05:55.269811  442056 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/ingress-addon-legacy-480050/apiserver.crt.dd3b5fb2 ...
	I0103 20:05:55.269841  442056 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/ingress-addon-legacy-480050/apiserver.crt.dd3b5fb2: {Name:mke8ed6bdf6ed15bc3a9b31d13250e9238aefc37 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 20:05:55.270023  442056 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/ingress-addon-legacy-480050/apiserver.key.dd3b5fb2 ...
	I0103 20:05:55.270042  442056 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/ingress-addon-legacy-480050/apiserver.key.dd3b5fb2: {Name:mka36dd76bb8346c5047d8653382c71bfca54f20 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 20:05:55.270129  442056 certs.go:337] copying /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/ingress-addon-legacy-480050/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/ingress-addon-legacy-480050/apiserver.crt
	I0103 20:05:55.270198  442056 certs.go:341] copying /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/ingress-addon-legacy-480050/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/ingress-addon-legacy-480050/apiserver.key
	I0103 20:05:55.270263  442056 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/ingress-addon-legacy-480050/proxy-client.key
	I0103 20:05:55.270279  442056 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/ingress-addon-legacy-480050/proxy-client.crt with IP's: []
	I0103 20:05:56.422648  442056 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/ingress-addon-legacy-480050/proxy-client.crt ...
	I0103 20:05:56.422680  442056 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/ingress-addon-legacy-480050/proxy-client.crt: {Name:mkab9a3ec7d3e2330d0d13a994bb12f51f223deb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 20:05:56.422868  442056 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/ingress-addon-legacy-480050/proxy-client.key ...
	I0103 20:05:56.422883  442056 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/ingress-addon-legacy-480050/proxy-client.key: {Name:mk40b8638d732e24b6f01d819b2d478b386500cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 20:05:56.422962  442056 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/ingress-addon-legacy-480050/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0103 20:05:56.422984  442056 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/ingress-addon-legacy-480050/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0103 20:05:56.422996  442056 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/ingress-addon-legacy-480050/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0103 20:05:56.423023  442056 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/ingress-addon-legacy-480050/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0103 20:05:56.423037  442056 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-409390/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0103 20:05:56.423051  442056 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-409390/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0103 20:05:56.423066  442056 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-409390/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0103 20:05:56.423084  442056 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-409390/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0103 20:05:56.423158  442056 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-409390/.minikube/certs/home/jenkins/minikube-integration/17885-409390/.minikube/certs/414763.pem (1338 bytes)
	W0103 20:05:56.423200  442056 certs.go:433] ignoring /home/jenkins/minikube-integration/17885-409390/.minikube/certs/home/jenkins/minikube-integration/17885-409390/.minikube/certs/414763_empty.pem, impossibly tiny 0 bytes
	I0103 20:05:56.423209  442056 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-409390/.minikube/certs/home/jenkins/minikube-integration/17885-409390/.minikube/certs/ca-key.pem (1679 bytes)
	I0103 20:05:56.423235  442056 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-409390/.minikube/certs/home/jenkins/minikube-integration/17885-409390/.minikube/certs/ca.pem (1078 bytes)
	I0103 20:05:56.423264  442056 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-409390/.minikube/certs/home/jenkins/minikube-integration/17885-409390/.minikube/certs/cert.pem (1123 bytes)
	I0103 20:05:56.423296  442056 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-409390/.minikube/certs/home/jenkins/minikube-integration/17885-409390/.minikube/certs/key.pem (1679 bytes)
	I0103 20:05:56.423345  442056 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-409390/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17885-409390/.minikube/files/etc/ssl/certs/4147632.pem (1708 bytes)
	I0103 20:05:56.423375  442056 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-409390/.minikube/files/etc/ssl/certs/4147632.pem -> /usr/share/ca-certificates/4147632.pem
	I0103 20:05:56.423392  442056 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-409390/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0103 20:05:56.423410  442056 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-409390/.minikube/certs/414763.pem -> /usr/share/ca-certificates/414763.pem
	I0103 20:05:56.423983  442056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/ingress-addon-legacy-480050/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0103 20:05:56.453850  442056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/ingress-addon-legacy-480050/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0103 20:05:56.482474  442056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/ingress-addon-legacy-480050/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0103 20:05:56.510891  442056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/ingress-addon-legacy-480050/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0103 20:05:56.540324  442056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-409390/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0103 20:05:56.568880  442056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-409390/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0103 20:05:56.598047  442056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-409390/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0103 20:05:56.626498  442056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-409390/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0103 20:05:56.654728  442056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-409390/.minikube/files/etc/ssl/certs/4147632.pem --> /usr/share/ca-certificates/4147632.pem (1708 bytes)
	I0103 20:05:56.683768  442056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-409390/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0103 20:05:56.712810  442056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-409390/.minikube/certs/414763.pem --> /usr/share/ca-certificates/414763.pem (1338 bytes)
	I0103 20:05:56.741857  442056 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0103 20:05:56.763131  442056 ssh_runner.go:195] Run: openssl version
	I0103 20:05:56.770152  442056 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/414763.pem && ln -fs /usr/share/ca-certificates/414763.pem /etc/ssl/certs/414763.pem"
	I0103 20:05:56.781998  442056 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/414763.pem
	I0103 20:05:56.787035  442056 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan  3 20:01 /usr/share/ca-certificates/414763.pem
	I0103 20:05:56.787152  442056 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/414763.pem
	I0103 20:05:56.795991  442056 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/414763.pem /etc/ssl/certs/51391683.0"
	I0103 20:05:56.807698  442056 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4147632.pem && ln -fs /usr/share/ca-certificates/4147632.pem /etc/ssl/certs/4147632.pem"
	I0103 20:05:56.819506  442056 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4147632.pem
	I0103 20:05:56.824202  442056 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan  3 20:01 /usr/share/ca-certificates/4147632.pem
	I0103 20:05:56.824268  442056 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4147632.pem
	I0103 20:05:56.832947  442056 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4147632.pem /etc/ssl/certs/3ec20f2e.0"
	I0103 20:05:56.844910  442056 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0103 20:05:56.856423  442056 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0103 20:05:56.861131  442056 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan  3 19:53 /usr/share/ca-certificates/minikubeCA.pem
	I0103 20:05:56.861204  442056 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0103 20:05:56.869855  442056 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
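
The test -s / openssl x509 -hash / ln -fs sequence above is how minikube registers each CA with the node's trust store: OpenSSL looks certificates up in /etc/ssl/certs via a subject-name-hash symlink named <hash>.0. A minimal Go sketch of that one step, with the certificate path taken from the log and openssl assumed to be on PATH:

// hashlink.go: a sketch of the hash-and-symlink step shown above
// (openssl x509 -hash -noout + ln -fs). Paths are illustrative.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	cert := "/usr/share/ca-certificates/414763.pem" // taken from the log
	// OpenSSL finds CAs in /etc/ssl/certs by the subject-name hash.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // ln -fs equivalent: replace any existing link
	if err := os.Symlink(cert, link); err != nil {
		panic(err)
	}
	fmt.Println("linked", link, "->", cert)
}
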
	I0103 20:05:56.881202  442056 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0103 20:05:56.885687  442056 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0103 20:05:56.885740  442056 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-480050 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-480050 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0103 20:05:56.885820  442056 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0103 20:05:56.885880  442056 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0103 20:05:56.926140  442056 cri.go:89] found id: ""
	I0103 20:05:56.926214  442056 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0103 20:05:56.936860  442056 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0103 20:05:56.947388  442056 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0103 20:05:56.947460  442056 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0103 20:05:56.957973  442056 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0103 20:05:56.958036  442056 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0103 20:05:57.013239  442056 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I0103 20:05:57.013820  442056 kubeadm.go:322] [preflight] Running pre-flight checks
	I0103 20:05:57.068869  442056 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0103 20:05:57.068958  442056 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1051-aws
	I0103 20:05:57.069007  442056 kubeadm.go:322] OS: Linux
	I0103 20:05:57.069054  442056 kubeadm.go:322] CGROUPS_CPU: enabled
	I0103 20:05:57.069105  442056 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0103 20:05:57.069153  442056 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0103 20:05:57.069202  442056 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0103 20:05:57.069260  442056 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0103 20:05:57.069316  442056 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0103 20:05:57.167469  442056 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0103 20:05:57.167576  442056 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0103 20:05:57.167672  442056 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0103 20:05:57.396733  442056 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0103 20:05:57.398233  442056 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0103 20:05:57.398283  442056 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0103 20:05:57.504091  442056 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0103 20:05:57.506369  442056 out.go:204]   - Generating certificates and keys ...
	I0103 20:05:57.506582  442056 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0103 20:05:57.506685  442056 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0103 20:05:58.028712  442056 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0103 20:05:58.232363  442056 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0103 20:05:58.876557  442056 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0103 20:05:59.425195  442056 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0103 20:05:59.884460  442056 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0103 20:05:59.884822  442056 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-480050 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0103 20:06:00.810275  442056 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0103 20:06:00.810626  442056 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-480050 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0103 20:06:01.377724  442056 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0103 20:06:01.824866  442056 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0103 20:06:02.325612  442056 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0103 20:06:02.325910  442056 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0103 20:06:02.606016  442056 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0103 20:06:03.004308  442056 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0103 20:06:03.265032  442056 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0103 20:06:03.410382  442056 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0103 20:06:03.411131  442056 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0103 20:06:03.413145  442056 out.go:204]   - Booting up control plane ...
	I0103 20:06:03.413245  442056 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0103 20:06:03.419212  442056 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0103 20:06:03.426160  442056 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0103 20:06:03.426970  442056 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0103 20:06:03.443457  442056 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0103 20:06:16.447139  442056 kubeadm.go:322] [apiclient] All control plane components are healthy after 13.003226 seconds
	I0103 20:06:16.447263  442056 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0103 20:06:16.470587  442056 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
	I0103 20:06:16.993290  442056 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0103 20:06:16.993437  442056 kubeadm.go:322] [mark-control-plane] Marking the node ingress-addon-legacy-480050 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I0103 20:06:17.502052  442056 kubeadm.go:322] [bootstrap-token] Using token: rbb5tm.bqqymkqe5w01xqlx
	I0103 20:06:17.504161  442056 out.go:204]   - Configuring RBAC rules ...
	I0103 20:06:17.504281  442056 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0103 20:06:17.508380  442056 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0103 20:06:17.516446  442056 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0103 20:06:17.520229  442056 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0103 20:06:17.522690  442056 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0103 20:06:17.525103  442056 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0103 20:06:17.535591  442056 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0103 20:06:17.839415  442056 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0103 20:06:17.957269  442056 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0103 20:06:17.958908  442056 kubeadm.go:322] 
	I0103 20:06:17.958980  442056 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0103 20:06:17.958986  442056 kubeadm.go:322] 
	I0103 20:06:17.959058  442056 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0103 20:06:17.959063  442056 kubeadm.go:322] 
	I0103 20:06:17.959087  442056 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0103 20:06:17.959143  442056 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0103 20:06:17.959191  442056 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0103 20:06:17.959196  442056 kubeadm.go:322] 
	I0103 20:06:17.959245  442056 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0103 20:06:17.959316  442056 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0103 20:06:17.959400  442056 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0103 20:06:17.959406  442056 kubeadm.go:322] 
	I0103 20:06:17.959485  442056 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0103 20:06:17.959562  442056 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0103 20:06:17.959566  442056 kubeadm.go:322] 
	I0103 20:06:17.959645  442056 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token rbb5tm.bqqymkqe5w01xqlx \
	I0103 20:06:17.959745  442056 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:497121a93982227783f08bfd7c063fc9f8a8d85d0f5f85a6107fb52aadafec60 \
	I0103 20:06:17.959769  442056 kubeadm.go:322]     --control-plane 
	I0103 20:06:17.959774  442056 kubeadm.go:322] 
	I0103 20:06:17.959853  442056 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0103 20:06:17.959858  442056 kubeadm.go:322] 
	I0103 20:06:17.959935  442056 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token rbb5tm.bqqymkqe5w01xqlx \
	I0103 20:06:17.960033  442056 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:497121a93982227783f08bfd7c063fc9f8a8d85d0f5f85a6107fb52aadafec60 
	I0103 20:06:17.963466  442056 kubeadm.go:322] W0103 20:05:57.012235    1233 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I0103 20:06:17.963762  442056 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1051-aws\n", err: exit status 1
	I0103 20:06:17.963916  442056 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0103 20:06:17.964049  442056 kubeadm.go:322] W0103 20:06:03.418977    1233 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0103 20:06:17.964195  442056 kubeadm.go:322] W0103 20:06:03.425940    1233 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
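
The --discovery-token-ca-cert-hash value printed in the join commands above is a pin on the cluster CA: SHA-256 over the CA certificate's DER-encoded Subject Public Key Info, printed as sha256:<hex>. A short Go sketch of the derivation, with the certificate path taken from the log:

// cacerthash.go: a sketch of how the discovery-token-ca-cert-hash is
// derived from the cluster CA certificate.
package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt") // path from the log
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		panic("no PEM block found in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// Hash the DER-encoded Subject Public Key Info, not the whole cert.
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Printf("sha256:%x\n", sum)
}
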
	I0103 20:06:17.964209  442056 cni.go:84] Creating CNI manager for ""
	I0103 20:06:17.964217  442056 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0103 20:06:17.966426  442056 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0103 20:06:17.968302  442056 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0103 20:06:17.974075  442056 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.18.20/kubectl ...
	I0103 20:06:17.974117  442056 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0103 20:06:18.001437  442056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0103 20:06:18.448877  442056 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0103 20:06:18.448948  442056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 20:06:18.449017  442056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=1b6a81cbc05f28310ff11df4170e79e2b8bf477a minikube.k8s.io/name=ingress-addon-legacy-480050 minikube.k8s.io/updated_at=2024_01_03T20_06_18_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 20:06:18.601342  442056 ops.go:34] apiserver oom_adj: -16
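
The oom_adj check above reads /proc/<pid>/oom_adj for the kube-apiserver process; the reported -16 tells the kernel OOM killer to deprioritize the apiserver under memory pressure. A minimal sketch of the same read, assuming pgrep is available:

// oomadj.go: a sketch of the check logged above — resolve the apiserver
// PID with pgrep, then read its oom_adj from /proc.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("pgrep", "kube-apiserver").Output()
	if err != nil {
		panic(err)
	}
	pid := strings.Fields(string(out))[0] // first matching PID
	adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
	if err != nil {
		panic(err)
	}
	fmt.Printf("apiserver oom_adj: %s", adj) // expect: -16
}
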
	I0103 20:06:18.601426  442056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 20:06:19.101873  442056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 20:06:19.602214  442056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 20:06:20.101668  442056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 20:06:20.602532  442056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 20:06:21.102036  442056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 20:06:21.601589  442056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 20:06:22.102020  442056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 20:06:22.601675  442056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 20:06:23.101623  442056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 20:06:23.602382  442056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 20:06:24.102377  442056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 20:06:24.602166  442056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 20:06:25.102251  442056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 20:06:25.602328  442056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 20:06:26.101584  442056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 20:06:26.602417  442056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 20:06:27.102236  442056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 20:06:27.602010  442056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 20:06:28.102601  442056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 20:06:28.602476  442056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 20:06:29.101593  442056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 20:06:29.602461  442056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 20:06:30.102730  442056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 20:06:30.601672  442056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 20:06:31.101581  442056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 20:06:31.601856  442056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 20:06:32.102512  442056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 20:06:32.602506  442056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 20:06:33.101792  442056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 20:06:33.209916  442056 kubeadm.go:1088] duration metric: took 14.761028842s to wait for elevateKubeSystemPrivileges.
	I0103 20:06:33.209942  442056 kubeadm.go:406] StartCluster complete in 36.324205315s
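
The burst of identical kubectl get sa default runs above is a fixed-interval readiness poll: the command is retried roughly every 500ms until the default ServiceAccount exists, which is what the 14.76s elevateKubeSystemPrivileges metric measures. A sketch of the pattern, with the interval taken from the log cadence and the timeout chosen for illustration:

// pollsa.go: a sketch of the fixed-interval poll visible above — rerun
// "kubectl get sa default" until it succeeds or a deadline passes.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(2 * time.Minute) // illustrative timeout
	for time.Now().Before(deadline) {
		if err := exec.Command("kubectl", "get", "sa", "default").Run(); err == nil {
			fmt.Println("default service account is ready")
			return
		}
		time.Sleep(500 * time.Millisecond) // matches the log cadence
	}
	fmt.Println("timed out waiting for default service account")
}
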
	I0103 20:06:33.209959  442056 settings.go:142] acquiring lock: {Name:mk35e0b2d8071191a72193c66ba9549131012420 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 20:06:33.210019  442056 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17885-409390/kubeconfig
	I0103 20:06:33.210774  442056 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17885-409390/kubeconfig: {Name:mkcf9b222e1b36afc1c2e4e412234b0c105c9bc2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 20:06:33.211499  442056 kapi.go:59] client config for ingress-addon-legacy-480050: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17885-409390/.minikube/profiles/ingress-addon-legacy-480050/client.crt", KeyFile:"/home/jenkins/minikube-integration/17885-409390/.minikube/profiles/ingress-addon-legacy-480050/client.key", CAFile:"/home/jenkins/minikube-integration/17885-409390/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16bfa00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0103 20:06:33.212758  442056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0103 20:06:33.213016  442056 cert_rotation.go:137] Starting client certificate rotation controller
	I0103 20:06:33.213221  442056 config.go:182] Loaded profile config "ingress-addon-legacy-480050": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I0103 20:06:33.212971  442056 addons.go:505] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0103 20:06:33.213340  442056 addons.go:69] Setting storage-provisioner=true in profile "ingress-addon-legacy-480050"
	I0103 20:06:33.213372  442056 addons.go:237] Setting addon storage-provisioner=true in "ingress-addon-legacy-480050"
	I0103 20:06:33.213414  442056 host.go:66] Checking if "ingress-addon-legacy-480050" exists ...
	I0103 20:06:33.213869  442056 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-480050 --format={{.State.Status}}
	I0103 20:06:33.214005  442056 addons.go:69] Setting default-storageclass=true in profile "ingress-addon-legacy-480050"
	I0103 20:06:33.214020  442056 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ingress-addon-legacy-480050"
	I0103 20:06:33.214262  442056 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-480050 --format={{.State.Status}}
	I0103 20:06:33.267471  442056 kapi.go:59] client config for ingress-addon-legacy-480050: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17885-409390/.minikube/profiles/ingress-addon-legacy-480050/client.crt", KeyFile:"/home/jenkins/minikube-integration/17885-409390/.minikube/profiles/ingress-addon-legacy-480050/client.key", CAFile:"/home/jenkins/minikube-integration/17885-409390/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16bfa00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0103 20:06:33.267800  442056 addons.go:237] Setting addon default-storageclass=true in "ingress-addon-legacy-480050"
	I0103 20:06:33.267854  442056 host.go:66] Checking if "ingress-addon-legacy-480050" exists ...
	I0103 20:06:33.268379  442056 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-480050 --format={{.State.Status}}
	I0103 20:06:33.270877  442056 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0103 20:06:33.273521  442056 addons.go:429] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0103 20:06:33.273541  442056 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0103 20:06:33.273614  442056 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-480050
	I0103 20:06:33.303107  442056 addons.go:429] installing /etc/kubernetes/addons/storageclass.yaml
	I0103 20:06:33.303129  442056 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0103 20:06:33.303198  442056 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-480050
	I0103 20:06:33.333472  442056 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/17885-409390/.minikube/machines/ingress-addon-legacy-480050/id_rsa Username:docker}
	I0103 20:06:33.338508  442056 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/17885-409390/.minikube/machines/ingress-addon-legacy-480050/id_rsa Username:docker}
	I0103 20:06:33.463509  442056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0103 20:06:33.533875  442056 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0103 20:06:33.569261  442056 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0103 20:06:33.791940  442056 kapi.go:248] "coredns" deployment in "kube-system" namespace and "ingress-addon-legacy-480050" context rescaled to 1 replicas
	I0103 20:06:33.791988  442056 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0103 20:06:33.793977  442056 out.go:177] * Verifying Kubernetes components...
	I0103 20:06:33.796893  442056 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0103 20:06:33.961611  442056 start.go:929] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0103 20:06:34.046967  442056 kapi.go:59] client config for ingress-addon-legacy-480050: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17885-409390/.minikube/profiles/ingress-addon-legacy-480050/client.crt", KeyFile:"/home/jenkins/minikube-integration/17885-409390/.minikube/profiles/ingress-addon-legacy-480050/client.key", CAFile:"/home/jenkins/minikube-integration/17885-409390/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16bfa00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0103 20:06:34.047341  442056 node_ready.go:35] waiting up to 6m0s for node "ingress-addon-legacy-480050" to be "Ready" ...
	I0103 20:06:34.072202  442056 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0103 20:06:34.073828  442056 addons.go:508] enable addons completed in 860.870135ms: enabled=[storage-provisioner default-storageclass]
	I0103 20:06:36.051928  442056 node_ready.go:58] node "ingress-addon-legacy-480050" has status "Ready":"False"
	I0103 20:06:38.551132  442056 node_ready.go:58] node "ingress-addon-legacy-480050" has status "Ready":"False"
	I0103 20:06:41.050673  442056 node_ready.go:58] node "ingress-addon-legacy-480050" has status "Ready":"False"
	I0103 20:06:41.551150  442056 node_ready.go:49] node "ingress-addon-legacy-480050" has status "Ready":"True"
	I0103 20:06:41.551176  442056 node_ready.go:38] duration metric: took 7.503793355s waiting for node "ingress-addon-legacy-480050" to be "Ready" ...
	I0103 20:06:41.551186  442056 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0103 20:06:41.558340  442056 pod_ready.go:78] waiting up to 6m0s for pod "coredns-66bff467f8-fmzzz" in "kube-system" namespace to be "Ready" ...
	I0103 20:06:43.562075  442056 pod_ready.go:102] pod "coredns-66bff467f8-fmzzz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-01-03 20:06:33 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: HostIPs:[] PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0103 20:06:45.564833  442056 pod_ready.go:102] pod "coredns-66bff467f8-fmzzz" in "kube-system" namespace has status "Ready":"False"
	I0103 20:06:47.565058  442056 pod_ready.go:102] pod "coredns-66bff467f8-fmzzz" in "kube-system" namespace has status "Ready":"False"
	I0103 20:06:50.065394  442056 pod_ready.go:92] pod "coredns-66bff467f8-fmzzz" in "kube-system" namespace has status "Ready":"True"
	I0103 20:06:50.065435  442056 pod_ready.go:81] duration metric: took 8.507047753s waiting for pod "coredns-66bff467f8-fmzzz" in "kube-system" namespace to be "Ready" ...
	I0103 20:06:50.065448  442056 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ingress-addon-legacy-480050" in "kube-system" namespace to be "Ready" ...
	I0103 20:06:50.071510  442056 pod_ready.go:92] pod "etcd-ingress-addon-legacy-480050" in "kube-system" namespace has status "Ready":"True"
	I0103 20:06:50.071538  442056 pod_ready.go:81] duration metric: took 6.082339ms waiting for pod "etcd-ingress-addon-legacy-480050" in "kube-system" namespace to be "Ready" ...
	I0103 20:06:50.071554  442056 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ingress-addon-legacy-480050" in "kube-system" namespace to be "Ready" ...
	I0103 20:06:50.077106  442056 pod_ready.go:92] pod "kube-apiserver-ingress-addon-legacy-480050" in "kube-system" namespace has status "Ready":"True"
	I0103 20:06:50.077133  442056 pod_ready.go:81] duration metric: took 5.571603ms waiting for pod "kube-apiserver-ingress-addon-legacy-480050" in "kube-system" namespace to be "Ready" ...
	I0103 20:06:50.077145  442056 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ingress-addon-legacy-480050" in "kube-system" namespace to be "Ready" ...
	I0103 20:06:50.083016  442056 pod_ready.go:92] pod "kube-controller-manager-ingress-addon-legacy-480050" in "kube-system" namespace has status "Ready":"True"
	I0103 20:06:50.083048  442056 pod_ready.go:81] duration metric: took 5.893748ms waiting for pod "kube-controller-manager-ingress-addon-legacy-480050" in "kube-system" namespace to be "Ready" ...
	I0103 20:06:50.083061  442056 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-qqp4b" in "kube-system" namespace to be "Ready" ...
	I0103 20:06:50.088732  442056 pod_ready.go:92] pod "kube-proxy-qqp4b" in "kube-system" namespace has status "Ready":"True"
	I0103 20:06:50.088823  442056 pod_ready.go:81] duration metric: took 5.751694ms waiting for pod "kube-proxy-qqp4b" in "kube-system" namespace to be "Ready" ...
	I0103 20:06:50.088860  442056 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ingress-addon-legacy-480050" in "kube-system" namespace to be "Ready" ...
	I0103 20:06:50.260301  442056 request.go:629] Waited for 171.31734ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ingress-addon-legacy-480050
	I0103 20:06:50.460388  442056 request.go:629] Waited for 197.361482ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ingress-addon-legacy-480050
	I0103 20:06:50.463112  442056 pod_ready.go:92] pod "kube-scheduler-ingress-addon-legacy-480050" in "kube-system" namespace has status "Ready":"True"
	I0103 20:06:50.463139  442056 pod_ready.go:81] duration metric: took 374.262575ms waiting for pod "kube-scheduler-ingress-addon-legacy-480050" in "kube-system" namespace to be "Ready" ...
	I0103 20:06:50.463152  442056 pod_ready.go:38] duration metric: took 8.911950375s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0103 20:06:50.463166  442056 api_server.go:52] waiting for apiserver process to appear ...
	I0103 20:06:50.463232  442056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 20:06:50.477030  442056 api_server.go:72] duration metric: took 16.685008069s to wait for apiserver process to appear ...
	I0103 20:06:50.477056  442056 api_server.go:88] waiting for apiserver healthz status ...
	I0103 20:06:50.477077  442056 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0103 20:06:50.485951  442056 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0103 20:06:50.487034  442056 api_server.go:141] control plane version: v1.18.20
	I0103 20:06:50.487062  442056 api_server.go:131] duration metric: took 9.998485ms to wait for apiserver health ...
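
The healthz probe above is a plain HTTPS GET against the apiserver that treats a 200 response with body "ok" as healthy. A minimal sketch (the InsecureSkipVerify here is an illustration shortcut; the real client verifies against the cluster CA):

// healthz.go: a sketch of the apiserver health check logged above.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
)

func main() {
	client := &http.Client{Transport: &http.Transport{
		// Illustration only: minikube's client trusts its own CA instead.
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}
	resp, err := client.Get("https://192.168.49.2:8443/healthz") // endpoint from the log
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.StatusCode, string(body)) // expect: 200 ok
}
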
	I0103 20:06:50.487072  442056 system_pods.go:43] waiting for kube-system pods to appear ...
	I0103 20:06:50.660483  442056 request.go:629] Waited for 173.331403ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0103 20:06:50.666721  442056 system_pods.go:59] 8 kube-system pods found
	I0103 20:06:50.666752  442056 system_pods.go:61] "coredns-66bff467f8-fmzzz" [872a4bcd-63a7-4ff3-949f-240c05c9a0bf] Running
	I0103 20:06:50.666761  442056 system_pods.go:61] "etcd-ingress-addon-legacy-480050" [0cb04739-633e-4b15-900a-3dc3753c9b99] Running
	I0103 20:06:50.666766  442056 system_pods.go:61] "kindnet-n8kbx" [ebbecde1-0660-4871-a63b-83fef243076b] Running
	I0103 20:06:50.666773  442056 system_pods.go:61] "kube-apiserver-ingress-addon-legacy-480050" [5879d166-c0e0-4537-b92d-1f7fb61a1ea6] Running
	I0103 20:06:50.666782  442056 system_pods.go:61] "kube-controller-manager-ingress-addon-legacy-480050" [e1ced655-8b79-4a17-8b6c-cd2a99fef1a0] Running
	I0103 20:06:50.666793  442056 system_pods.go:61] "kube-proxy-qqp4b" [4f3ea8c7-56b9-42b2-9911-22306ac5085c] Running
	I0103 20:06:50.666798  442056 system_pods.go:61] "kube-scheduler-ingress-addon-legacy-480050" [f1bd6691-4a7d-4538-9248-73f57ba543b6] Running
	I0103 20:06:50.666805  442056 system_pods.go:61] "storage-provisioner" [94088714-5334-429f-b154-fcbfe4469c13] Running
	I0103 20:06:50.666811  442056 system_pods.go:74] duration metric: took 179.734148ms to wait for pod list to return data ...
	I0103 20:06:50.666824  442056 default_sa.go:34] waiting for default service account to be created ...
	I0103 20:06:50.860296  442056 request.go:629] Waited for 193.367839ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/default/serviceaccounts
	I0103 20:06:50.862804  442056 default_sa.go:45] found service account: "default"
	I0103 20:06:50.862836  442056 default_sa.go:55] duration metric: took 196.0055ms for default service account to be created ...
	I0103 20:06:50.862847  442056 system_pods.go:116] waiting for k8s-apps to be running ...
	I0103 20:06:51.060269  442056 request.go:629] Waited for 197.33462ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0103 20:06:51.066290  442056 system_pods.go:86] 8 kube-system pods found
	I0103 20:06:51.066324  442056 system_pods.go:89] "coredns-66bff467f8-fmzzz" [872a4bcd-63a7-4ff3-949f-240c05c9a0bf] Running
	I0103 20:06:51.066333  442056 system_pods.go:89] "etcd-ingress-addon-legacy-480050" [0cb04739-633e-4b15-900a-3dc3753c9b99] Running
	I0103 20:06:51.066339  442056 system_pods.go:89] "kindnet-n8kbx" [ebbecde1-0660-4871-a63b-83fef243076b] Running
	I0103 20:06:51.066393  442056 system_pods.go:89] "kube-apiserver-ingress-addon-legacy-480050" [5879d166-c0e0-4537-b92d-1f7fb61a1ea6] Running
	I0103 20:06:51.066407  442056 system_pods.go:89] "kube-controller-manager-ingress-addon-legacy-480050" [e1ced655-8b79-4a17-8b6c-cd2a99fef1a0] Running
	I0103 20:06:51.066413  442056 system_pods.go:89] "kube-proxy-qqp4b" [4f3ea8c7-56b9-42b2-9911-22306ac5085c] Running
	I0103 20:06:51.066418  442056 system_pods.go:89] "kube-scheduler-ingress-addon-legacy-480050" [f1bd6691-4a7d-4538-9248-73f57ba543b6] Running
	I0103 20:06:51.066423  442056 system_pods.go:89] "storage-provisioner" [94088714-5334-429f-b154-fcbfe4469c13] Running
	I0103 20:06:51.066447  442056 system_pods.go:126] duration metric: took 203.592226ms to wait for k8s-apps to be running ...
	I0103 20:06:51.066463  442056 system_svc.go:44] waiting for kubelet service to be running ....
	I0103 20:06:51.066549  442056 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0103 20:06:51.082068  442056 system_svc.go:56] duration metric: took 15.594629ms WaitForService to wait for kubelet.
	I0103 20:06:51.082098  442056 kubeadm.go:581] duration metric: took 17.290082362s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0103 20:06:51.082119  442056 node_conditions.go:102] verifying NodePressure condition ...
	I0103 20:06:51.260557  442056 request.go:629] Waited for 178.318297ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes
	I0103 20:06:51.263503  442056 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0103 20:06:51.263538  442056 node_conditions.go:123] node cpu capacity is 2
	I0103 20:06:51.263549  442056 node_conditions.go:105] duration metric: took 181.425028ms to run NodePressure ...
	I0103 20:06:51.263582  442056 start.go:228] waiting for startup goroutines ...
	I0103 20:06:51.263596  442056 start.go:233] waiting for cluster config update ...
	I0103 20:06:51.263607  442056 start.go:242] writing updated cluster config ...
	I0103 20:06:51.263910  442056 ssh_runner.go:195] Run: rm -f paused
	I0103 20:06:51.325843  442056 start.go:600] kubectl: 1.29.0, cluster: 1.18.20 (minor skew: 11)
	I0103 20:06:51.328703  442056 out.go:177] 
	W0103 20:06:51.330971  442056 out.go:239] ! /usr/local/bin/kubectl is version 1.29.0, which may have incompatibilities with Kubernetes 1.18.20.
	I0103 20:06:51.333504  442056 out.go:177]   - Want kubectl v1.18.20? Try 'minikube kubectl -- get pods -A'
	I0103 20:06:51.335610  442056 out.go:177] * Done! kubectl is now configured to use "ingress-addon-legacy-480050" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Jan 03 20:09:53 ingress-addon-legacy-480050 conmon[3602]: conmon 3ecbca09b4cf078d9edd <ninfo>: container 3613 exited with status 1
	Jan 03 20:09:53 ingress-addon-legacy-480050 crio[898]: time="2024-01-03 20:09:53.394074418Z" level=info msg="Started container" PID=3613 containerID=3ecbca09b4cf078d9edd65ae7d8e9ba1c6de5491c1443ca59a6345f5e2520630 description=default/hello-world-app-5f5d8b66bb-mjkfr/hello-world-app id=b8174dba-8b91-46d5-8602-a9e6bf48d67f name=/runtime.v1alpha2.RuntimeService/StartContainer sandboxID=53bce59b7ce0c931b1c333c8d8789ab92fdfbf22015ad818f334929756f76c86
	Jan 03 20:09:53 ingress-addon-legacy-480050 crio[898]: time="2024-01-03 20:09:53.731164340Z" level=info msg="Removing container: 9ebf6df31d88265ad9342c366d1d7f36b353e7658f9064b0ebae1f7d301564a6" id=3fa79902-ef7f-4e9c-9526-2ddeb0b9fc20 name=/runtime.v1alpha2.RuntimeService/RemoveContainer
	Jan 03 20:09:53 ingress-addon-legacy-480050 crio[898]: time="2024-01-03 20:09:53.753347244Z" level=info msg="Removed container 9ebf6df31d88265ad9342c366d1d7f36b353e7658f9064b0ebae1f7d301564a6: default/hello-world-app-5f5d8b66bb-mjkfr/hello-world-app" id=3fa79902-ef7f-4e9c-9526-2ddeb0b9fc20 name=/runtime.v1alpha2.RuntimeService/RemoveContainer
	Jan 03 20:09:53 ingress-addon-legacy-480050 crio[898]: time="2024-01-03 20:09:53.769509747Z" level=info msg="Stopping pod sandbox: 07a75ff5711d51e0db2ea06700415f246e284c1de3ce420a9bb77517ef1e9bdf" id=7eac3b0d-026e-4c16-a96a-6cc22c050169 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Jan 03 20:09:53 ingress-addon-legacy-480050 crio[898]: time="2024-01-03 20:09:53.769556901Z" level=info msg="Stopped pod sandbox (already stopped): 07a75ff5711d51e0db2ea06700415f246e284c1de3ce420a9bb77517ef1e9bdf" id=7eac3b0d-026e-4c16-a96a-6cc22c050169 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Jan 03 20:09:54 ingress-addon-legacy-480050 crio[898]: time="2024-01-03 20:09:54.694274338Z" level=info msg="Stopping container: 09fac0ef0c95120bd0c40a6dfbc9df8bd84ed68f3bfc0f56a3af8a7de084e60a (timeout: 2s)" id=bb841cc2-4ffa-4554-8ae6-70477586016d name=/runtime.v1alpha2.RuntimeService/StopContainer
	Jan 03 20:09:54 ingress-addon-legacy-480050 crio[898]: time="2024-01-03 20:09:54.702879132Z" level=info msg="Stopping container: 09fac0ef0c95120bd0c40a6dfbc9df8bd84ed68f3bfc0f56a3af8a7de084e60a (timeout: 2s)" id=682553b5-d927-46c5-95da-72cbbd1b5200 name=/runtime.v1alpha2.RuntimeService/StopContainer
	Jan 03 20:09:55 ingress-addon-legacy-480050 crio[898]: time="2024-01-03 20:09:55.301759556Z" level=info msg="Stopping pod sandbox: 07a75ff5711d51e0db2ea06700415f246e284c1de3ce420a9bb77517ef1e9bdf" id=f9b4c41b-2142-4777-afa1-3ffcd7a1af06 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Jan 03 20:09:55 ingress-addon-legacy-480050 crio[898]: time="2024-01-03 20:09:55.301804766Z" level=info msg="Stopped pod sandbox (already stopped): 07a75ff5711d51e0db2ea06700415f246e284c1de3ce420a9bb77517ef1e9bdf" id=f9b4c41b-2142-4777-afa1-3ffcd7a1af06 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Jan 03 20:09:56 ingress-addon-legacy-480050 crio[898]: time="2024-01-03 20:09:56.710159378Z" level=warning msg="Stopping container 09fac0ef0c95120bd0c40a6dfbc9df8bd84ed68f3bfc0f56a3af8a7de084e60a with stop signal timed out: timeout reached after 2 seconds waiting for container process to exit" id=bb841cc2-4ffa-4554-8ae6-70477586016d name=/runtime.v1alpha2.RuntimeService/StopContainer
	Jan 03 20:09:56 ingress-addon-legacy-480050 conmon[2692]: conmon 09fac0ef0c95120bd0c4 <ninfo>: container 2703 exited with status 137
	Jan 03 20:09:56 ingress-addon-legacy-480050 crio[898]: time="2024-01-03 20:09:56.873040990Z" level=info msg="Stopped container 09fac0ef0c95120bd0c40a6dfbc9df8bd84ed68f3bfc0f56a3af8a7de084e60a: ingress-nginx/ingress-nginx-controller-7fcf777cb7-9m2z8/controller" id=682553b5-d927-46c5-95da-72cbbd1b5200 name=/runtime.v1alpha2.RuntimeService/StopContainer
	Jan 03 20:09:56 ingress-addon-legacy-480050 crio[898]: time="2024-01-03 20:09:56.873646833Z" level=info msg="Stopped container 09fac0ef0c95120bd0c40a6dfbc9df8bd84ed68f3bfc0f56a3af8a7de084e60a: ingress-nginx/ingress-nginx-controller-7fcf777cb7-9m2z8/controller" id=bb841cc2-4ffa-4554-8ae6-70477586016d name=/runtime.v1alpha2.RuntimeService/StopContainer
	Jan 03 20:09:56 ingress-addon-legacy-480050 crio[898]: time="2024-01-03 20:09:56.874155511Z" level=info msg="Stopping pod sandbox: 65c6f57b6ba46f6c42e77e80b23ff53aaec731acfdbcfc4d00d9228ab72a4eab" id=685e0ce2-6749-4f3b-990f-d62ea0db18bf name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Jan 03 20:09:56 ingress-addon-legacy-480050 crio[898]: time="2024-01-03 20:09:56.874401957Z" level=info msg="Stopping pod sandbox: 65c6f57b6ba46f6c42e77e80b23ff53aaec731acfdbcfc4d00d9228ab72a4eab" id=3eea5f59-a173-4486-a919-2b70a561aadf name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Jan 03 20:09:56 ingress-addon-legacy-480050 crio[898]: time="2024-01-03 20:09:56.877831516Z" level=info msg="Restoring iptables rules: *nat\n:KUBE-HOSTPORTS - [0:0]\n:KUBE-HP-RAJNYPGWEMKIAOVI - [0:0]\n:KUBE-HP-MXJ6G46L7FFDIVLE - [0:0]\n-X KUBE-HP-MXJ6G46L7FFDIVLE\n-X KUBE-HP-RAJNYPGWEMKIAOVI\nCOMMIT\n"
	Jan 03 20:09:56 ingress-addon-legacy-480050 crio[898]: time="2024-01-03 20:09:56.879445756Z" level=info msg="Closing host port tcp:80"
	Jan 03 20:09:56 ingress-addon-legacy-480050 crio[898]: time="2024-01-03 20:09:56.879497226Z" level=info msg="Closing host port tcp:443"
	Jan 03 20:09:56 ingress-addon-legacy-480050 crio[898]: time="2024-01-03 20:09:56.880690605Z" level=info msg="Host port tcp:80 does not have an open socket"
	Jan 03 20:09:56 ingress-addon-legacy-480050 crio[898]: time="2024-01-03 20:09:56.880714129Z" level=info msg="Host port tcp:443 does not have an open socket"
	Jan 03 20:09:56 ingress-addon-legacy-480050 crio[898]: time="2024-01-03 20:09:56.880853589Z" level=info msg="Got pod network &{Name:ingress-nginx-controller-7fcf777cb7-9m2z8 Namespace:ingress-nginx ID:65c6f57b6ba46f6c42e77e80b23ff53aaec731acfdbcfc4d00d9228ab72a4eab UID:5eb9834c-c1d1-4fe0-a1a8-c2828c61d076 NetNS:/var/run/netns/30706313-b180-478c-b9a9-12c208643f7f Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Jan 03 20:09:56 ingress-addon-legacy-480050 crio[898]: time="2024-01-03 20:09:56.881007622Z" level=info msg="Deleting pod ingress-nginx_ingress-nginx-controller-7fcf777cb7-9m2z8 from CNI network \"kindnet\" (type=ptp)"
	Jan 03 20:09:56 ingress-addon-legacy-480050 crio[898]: time="2024-01-03 20:09:56.912181933Z" level=info msg="Stopped pod sandbox: 65c6f57b6ba46f6c42e77e80b23ff53aaec731acfdbcfc4d00d9228ab72a4eab" id=685e0ce2-6749-4f3b-990f-d62ea0db18bf name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Jan 03 20:09:56 ingress-addon-legacy-480050 crio[898]: time="2024-01-03 20:09:56.912306247Z" level=info msg="Stopped pod sandbox (already stopped): 65c6f57b6ba46f6c42e77e80b23ff53aaec731acfdbcfc4d00d9228ab72a4eab" id=3eea5f59-a173-4486-a919-2b70a561aadf name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	3ecbca09b4cf0       dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79                                                   9 seconds ago       Exited              hello-world-app           2                   53bce59b7ce0c       hello-world-app-5f5d8b66bb-mjkfr
	7237e61f96e77       docker.io/library/nginx@sha256:7913e8fa2e6a5f0160a5e6b7ea48b7d4a301c6058d63c3d632a35a59093cb4eb                    2 minutes ago       Running             nginx                     0                   3ce6ee3fba8c8       nginx
	09fac0ef0c951       registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324   2 minutes ago       Exited              controller                0                   65c6f57b6ba46       ingress-nginx-controller-7fcf777cb7-9m2z8
	26d47d5957a9c       docker.io/jettech/kube-webhook-certgen@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7     3 minutes ago       Exited              patch                     0                   d38558428dcd5       ingress-nginx-admission-patch-28l6d
	3b1d0036dbd79       docker.io/jettech/kube-webhook-certgen@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7     3 minutes ago       Exited              create                    0                   165d858d612d1       ingress-nginx-admission-create-p5tgk
	1b09b0c7a0941       gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2    3 minutes ago       Running             storage-provisioner       0                   f59f024cff243       storage-provisioner
	0aae1965e7c17       6e17ba78cf3ebe1410fe828dc4ca57d3df37ad0b3c1a64161e5c27d57a24d184                                                   3 minutes ago       Running             coredns                   0                   17250acb428aa       coredns-66bff467f8-fmzzz
	8be5f8a4a8a38       docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052                 3 minutes ago       Running             kindnet-cni               0                   43cfff3e88925       kindnet-n8kbx
	ba16032c0e5f0       565297bc6f7d41fdb7a8ac7f9d75617ef4e6efdd1b1e41af6e060e19c44c28a8                                                   3 minutes ago       Running             kube-proxy                0                   7ee5a265d6582       kube-proxy-qqp4b
	360c6c7378ddd       ab707b0a0ea339254cc6e3f2e7d618d4793d5129acb2288e9194769271404952                                                   3 minutes ago       Running             etcd                      0                   62fd831f38320       etcd-ingress-addon-legacy-480050
	7e15e878cfdde       2694cf044d66591c37b12c60ce1f1cdba3d271af5ebda43a2e4d32ebbadd97d0                                                   3 minutes ago       Running             kube-apiserver            0                   00329b649ba4a       kube-apiserver-ingress-addon-legacy-480050
	3723d342434fe       095f37015706de6eedb4f57eb2f9a25a1e3bf4bec63d50ba73f8968ef4094fd1                                                   3 minutes ago       Running             kube-scheduler            0                   78270e859028f       kube-scheduler-ingress-addon-legacy-480050
	ee702e661d9f8       68a4fac29a865f21217550dbd3570dc1adbc602cf05d6eeb6f060eec1359e1f1                                                   3 minutes ago       Running             kube-controller-manager   0                   5b508412b7173       kube-controller-manager-ingress-addon-legacy-480050
	
	
	==> coredns [0aae1965e7c17f96865320c4667a7cb3142a56fccb2ba3409e9a917d34d8ca7f] <==
	[INFO] 10.244.0.5:48215 - 16911 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.00003771s
	[INFO] 10.244.0.5:48215 - 27973 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002224873s
	[INFO] 10.244.0.5:34812 - 50956 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.003132061s
	[INFO] 10.244.0.5:48215 - 16069 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001612696s
	[INFO] 10.244.0.5:34812 - 63159 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001558814s
	[INFO] 10.244.0.5:34812 - 2878 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000107691s
	[INFO] 10.244.0.5:48215 - 29194 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000235944s
	[INFO] 10.244.0.5:50140 - 35989 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000111605s
	[INFO] 10.244.0.5:49282 - 6253 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000043158s
	[INFO] 10.244.0.5:49282 - 46434 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000037965s
	[INFO] 10.244.0.5:50140 - 21361 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000032714s
	[INFO] 10.244.0.5:49282 - 41965 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000037177s
	[INFO] 10.244.0.5:50140 - 39802 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000030621s
	[INFO] 10.244.0.5:50140 - 59710 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000045611s
	[INFO] 10.244.0.5:49282 - 60770 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000030301s
	[INFO] 10.244.0.5:50140 - 11952 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000055909s
	[INFO] 10.244.0.5:49282 - 3044 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000099248s
	[INFO] 10.244.0.5:49282 - 57421 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000038589s
	[INFO] 10.244.0.5:50140 - 3334 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000026322s
	[INFO] 10.244.0.5:50140 - 17264 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.000856694s
	[INFO] 10.244.0.5:49282 - 59771 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001354813s
	[INFO] 10.244.0.5:50140 - 48559 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001065291s
	[INFO] 10.244.0.5:49282 - 9995 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.000711982s
	[INFO] 10.244.0.5:50140 - 33640 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.00004155s
	[INFO] 10.244.0.5:49282 - 33781 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000081s
	
	
	==> describe nodes <==
	Name:               ingress-addon-legacy-480050
	Roles:              master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ingress-addon-legacy-480050
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1b6a81cbc05f28310ff11df4170e79e2b8bf477a
	                    minikube.k8s.io/name=ingress-addon-legacy-480050
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_03T20_06_18_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 03 Jan 2024 20:06:14 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ingress-addon-legacy-480050
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 03 Jan 2024 20:10:01 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 03 Jan 2024 20:09:51 +0000   Wed, 03 Jan 2024 20:06:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 03 Jan 2024 20:09:51 +0000   Wed, 03 Jan 2024 20:06:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 03 Jan 2024 20:09:51 +0000   Wed, 03 Jan 2024 20:06:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 03 Jan 2024 20:09:51 +0000   Wed, 03 Jan 2024 20:06:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ingress-addon-legacy-480050
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022504Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022504Ki
	  pods:               110
	System Info:
	  Machine ID:                 7ae80536699f448a9220620606057f45
	  System UUID:                87b6d6ea-525b-446f-85cc-d1d00921e48b
	  Boot ID:                    75f8dc93-969c-4083-a399-3fa01ac68612
	  Kernel Version:             5.15.0-1051-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.18.20
	  Kube-Proxy Version:         v1.18.20
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                                   ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5f5d8b66bb-mjkfr                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	  default                     nginx                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m47s
	  kube-system                 coredns-66bff467f8-fmzzz                               100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     3m29s
	  kube-system                 etcd-ingress-addon-legacy-480050                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m41s
	  kube-system                 kindnet-n8kbx                                          100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      3m30s
	  kube-system                 kube-apiserver-ingress-addon-legacy-480050             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m41s
	  kube-system                 kube-controller-manager-ingress-addon-legacy-480050    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m41s
	  kube-system                 kube-proxy-qqp4b                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m30s
	  kube-system                 kube-scheduler-ingress-addon-legacy-480050             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m41s
	  kube-system                 storage-provisioner                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m28s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             120Mi (1%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  NodeHasSufficientMemory  3m56s (x5 over 3m56s)  kubelet     Node ingress-addon-legacy-480050 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m56s (x5 over 3m56s)  kubelet     Node ingress-addon-legacy-480050 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m56s (x4 over 3m56s)  kubelet     Node ingress-addon-legacy-480050 status is now: NodeHasSufficientPID
	  Normal  Starting                 3m41s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m41s                  kubelet     Node ingress-addon-legacy-480050 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m41s                  kubelet     Node ingress-addon-legacy-480050 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m41s                  kubelet     Node ingress-addon-legacy-480050 status is now: NodeHasSufficientPID
	  Normal  Starting                 3m28s                  kube-proxy  Starting kube-proxy.
	  Normal  NodeReady                3m21s                  kubelet     Node ingress-addon-legacy-480050 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.001189] FS-Cache: O-key=[8] 'ccd1c90000000000'
	[  +0.000818] FS-Cache: N-cookie c=0000001e [p=00000015 fl=2 nc=0 na=1]
	[  +0.001059] FS-Cache: N-cookie d=000000008497ac2d{9p.inode} n=00000000a750ea4f
	[  +0.001301] FS-Cache: N-key=[8] 'ccd1c90000000000'
	[  +0.014646] FS-Cache: Duplicate cookie detected
	[  +0.000925] FS-Cache: O-cookie c=00000018 [p=00000015 fl=226 nc=0 na=1]
	[  +0.001115] FS-Cache: O-cookie d=000000008497ac2d{9p.inode} n=00000000f7d3da5e
	[  +0.001218] FS-Cache: O-key=[8] 'ccd1c90000000000'
	[  +0.000824] FS-Cache: N-cookie c=0000001f [p=00000015 fl=2 nc=0 na=1]
	[  +0.001156] FS-Cache: N-cookie d=000000008497ac2d{9p.inode} n=00000000bc524ce4
	[  +0.001241] FS-Cache: N-key=[8] 'ccd1c90000000000'
	[  +2.760106] FS-Cache: Duplicate cookie detected
	[  +0.000784] FS-Cache: O-cookie c=00000016 [p=00000015 fl=226 nc=0 na=1]
	[  +0.001116] FS-Cache: O-cookie d=000000008497ac2d{9p.inode} n=00000000ca9fc0f7
	[  +0.001225] FS-Cache: O-key=[8] 'cbd1c90000000000'
	[  +0.000783] FS-Cache: N-cookie c=00000021 [p=00000015 fl=2 nc=0 na=1]
	[  +0.001000] FS-Cache: N-cookie d=000000008497ac2d{9p.inode} n=000000003725d1cd
	[  +0.001192] FS-Cache: N-key=[8] 'cbd1c90000000000'
	[  +0.402621] FS-Cache: Duplicate cookie detected
	[  +0.000828] FS-Cache: O-cookie c=0000001b [p=00000015 fl=226 nc=0 na=1]
	[  +0.001155] FS-Cache: O-cookie d=000000008497ac2d{9p.inode} n=00000000458cff56
	[  +0.001202] FS-Cache: O-key=[8] 'd1d1c90000000000'
	[  +0.000836] FS-Cache: N-cookie c=00000022 [p=00000015 fl=2 nc=0 na=1]
	[  +0.001046] FS-Cache: N-cookie d=000000008497ac2d{9p.inode} n=00000000263e5b2a
	[  +0.001184] FS-Cache: N-key=[8] 'd1d1c90000000000'
	
	
	==> etcd [360c6c7378ddd99efe541b4689fa13b36d912ebdc4723314977c50deabf9ff5d] <==
	raft2024/01/03 20:06:09 INFO: aec36adc501070cc became follower at term 0
	raft2024/01/03 20:06:09 INFO: newRaft aec36adc501070cc [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
	raft2024/01/03 20:06:09 INFO: aec36adc501070cc became follower at term 1
	raft2024/01/03 20:06:09 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2024-01-03 20:06:09.632756 W | auth: simple token is not cryptographically signed
	2024-01-03 20:06:09.635755 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
	2024-01-03 20:06:09.638947 I | etcdserver: aec36adc501070cc as single-node; fast-forwarding 9 ticks (election ticks 10)
	raft2024/01/03 20:06:09 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2024-01-03 20:06:09.639775 I | etcdserver/membership: added member aec36adc501070cc [https://192.168.49.2:2380] to cluster fa54960ea34d58be
	2024-01-03 20:06:09.641258 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2024-01-03 20:06:09.641496 I | embed: listening for peers on 192.168.49.2:2380
	2024-01-03 20:06:09.641743 I | embed: listening for metrics on http://127.0.0.1:2381
	raft2024/01/03 20:06:10 INFO: aec36adc501070cc is starting a new election at term 1
	raft2024/01/03 20:06:10 INFO: aec36adc501070cc became candidate at term 2
	raft2024/01/03 20:06:10 INFO: aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2
	raft2024/01/03 20:06:10 INFO: aec36adc501070cc became leader at term 2
	raft2024/01/03 20:06:10 INFO: raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2
	2024-01-03 20:06:10.323520 I | etcdserver: published {Name:ingress-addon-legacy-480050 ClientURLs:[https://192.168.49.2:2379]} to cluster fa54960ea34d58be
	2024-01-03 20:06:10.323580 I | embed: ready to serve client requests
	2024-01-03 20:06:10.324953 I | embed: serving client requests on 127.0.0.1:2379
	2024-01-03 20:06:10.325147 I | etcdserver: setting up the initial cluster version to 3.4
	2024-01-03 20:06:10.325402 I | embed: ready to serve client requests
	2024-01-03 20:06:10.326651 I | embed: serving client requests on 192.168.49.2:2379
	2024-01-03 20:06:10.336881 N | etcdserver/membership: set the initial cluster version to 3.4
	2024-01-03 20:06:10.337079 I | etcdserver/api: enabled capabilities for version 3.4
	
	
	==> kernel <==
	 20:10:02 up  1:52,  0 users,  load average: 0.77, 1.28, 1.94
	Linux ingress-addon-legacy-480050 5.15.0-1051-aws #56~20.04.1-Ubuntu SMP Tue Nov 28 15:43:06 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	
	==> kindnet [8be5f8a4a8a38790669ba77468e26c22bcd607aba09f2ab10e9f95940da04021] <==
	I0103 20:07:57.264473       1 main.go:227] handling current node
	I0103 20:08:07.274827       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0103 20:08:07.274855       1 main.go:227] handling current node
	I0103 20:08:17.284252       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0103 20:08:17.284280       1 main.go:227] handling current node
	I0103 20:08:27.293664       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0103 20:08:27.293696       1 main.go:227] handling current node
	I0103 20:08:37.297578       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0103 20:08:37.297607       1 main.go:227] handling current node
	I0103 20:08:47.309737       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0103 20:08:47.309767       1 main.go:227] handling current node
	I0103 20:08:57.321972       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0103 20:08:57.322108       1 main.go:227] handling current node
	I0103 20:09:07.325780       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0103 20:09:07.325807       1 main.go:227] handling current node
	I0103 20:09:17.337007       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0103 20:09:17.337037       1 main.go:227] handling current node
	I0103 20:09:27.347497       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0103 20:09:27.347524       1 main.go:227] handling current node
	I0103 20:09:37.350660       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0103 20:09:37.350688       1 main.go:227] handling current node
	I0103 20:09:47.362387       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0103 20:09:47.362417       1 main.go:227] handling current node
	I0103 20:09:57.366385       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0103 20:09:57.366416       1 main.go:227] handling current node
	
	
	==> kube-apiserver [7e15e878cfddec788dab7e0407332da3a2d5f6c65519f63f5b0f1607fe28aa2d] <==
	I0103 20:06:14.819068       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0103 20:06:14.892743       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0103 20:06:14.959540       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0103 20:06:14.959761       1 cache.go:39] Caches are synced for autoregister controller
	I0103 20:06:14.962415       1 shared_informer.go:230] Caches are synced for crd-autoregister 
	I0103 20:06:14.962545       1 shared_informer.go:230] Caches are synced for cluster_authentication_trust_controller 
	I0103 20:06:15.658254       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0103 20:06:15.658285       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0103 20:06:15.678710       1 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
	I0103 20:06:15.681798       1 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
	I0103 20:06:15.681823       1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
	I0103 20:06:16.104159       1 controller.go:609] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0103 20:06:16.203380       1 controller.go:609] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0103 20:06:16.325951       1 lease.go:224] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I0103 20:06:16.327012       1 controller.go:609] quota admission added evaluator for: endpoints
	I0103 20:06:16.330947       1 controller.go:609] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0103 20:06:17.122760       1 controller.go:609] quota admission added evaluator for: serviceaccounts
	I0103 20:06:17.816289       1 controller.go:609] quota admission added evaluator for: deployments.apps
	I0103 20:06:17.915792       1 controller.go:609] quota admission added evaluator for: daemonsets.apps
	I0103 20:06:21.235592       1 controller.go:609] quota admission added evaluator for: leases.coordination.k8s.io
	I0103 20:06:32.829283       1 controller.go:609] quota admission added evaluator for: controllerrevisions.apps
	I0103 20:06:33.014370       1 controller.go:609] quota admission added evaluator for: replicasets.apps
	I0103 20:06:52.223514       1 controller.go:609] quota admission added evaluator for: jobs.batch
	I0103 20:07:15.292398       1 controller.go:609] quota admission added evaluator for: ingresses.networking.k8s.io
	E0103 20:09:54.712028       1 authentication.go:53] Unable to authenticate the request due to an error: [invalid bearer token, Token has been invalidated]
	
	
	==> kube-controller-manager [ee702e661d9f8f910a09747308c73d03bce7456c487fa19f8afbc9779144bc06] <==
	, Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0x40013143d8)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0x40004a6df8)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:1, NumberReady:0, ObservedGeneration:1, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:1, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest version and try again
	I0103 20:06:33.048722       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"05f9c09a-1de1-4986-9a0e-2eb127b86135", APIVersion:"apps/v1", ResourceVersion:"351", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-66bff467f8-fmzzz
	I0103 20:06:33.061697       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"05f9c09a-1de1-4986-9a0e-2eb127b86135", APIVersion:"apps/v1", ResourceVersion:"351", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-66bff467f8-qt9jp
	I0103 20:06:33.133676       1 shared_informer.go:230] Caches are synced for disruption 
	I0103 20:06:33.133708       1 disruption.go:339] Sending events to api server.
	I0103 20:06:33.169507       1 shared_informer.go:230] Caches are synced for stateful set 
	I0103 20:06:33.238533       1 shared_informer.go:230] Caches are synced for resource quota 
	I0103 20:06:33.250242       1 shared_informer.go:230] Caches are synced for resource quota 
	I0103 20:06:33.384989       1 shared_informer.go:230] Caches are synced for garbage collector 
	I0103 20:06:33.385017       1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0103 20:06:33.435607       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"37c13192-e7aa-468f-ab68-cd14e3ac1fe0", APIVersion:"apps/v1", ResourceVersion:"371", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set coredns-66bff467f8 to 1
	I0103 20:06:33.456257       1 shared_informer.go:230] Caches are synced for attach detach 
	I0103 20:06:33.498100       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"05f9c09a-1de1-4986-9a0e-2eb127b86135", APIVersion:"apps/v1", ResourceVersion:"372", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: coredns-66bff467f8-qt9jp
	I0103 20:06:33.628138       1 request.go:621] Throttling request took 1.046904171s, request: GET:https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1beta1?timeout=32s
	I0103 20:06:34.075711       1 shared_informer.go:223] Waiting for caches to sync for garbage collector
	I0103 20:06:34.075762       1 shared_informer.go:230] Caches are synced for garbage collector 
	I0103 20:06:42.832761       1 node_lifecycle_controller.go:1226] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	I0103 20:06:52.204409       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"51473b34-48c1-4de4-894c-61d1648a3304", APIVersion:"apps/v1", ResourceVersion:"480", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set ingress-nginx-controller-7fcf777cb7 to 1
	I0103 20:06:52.220559       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7", UID:"eeb608ce-8608-45c7-9613-297d44c98516", APIVersion:"apps/v1", ResourceVersion:"481", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-controller-7fcf777cb7-9m2z8
	I0103 20:06:52.289929       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"830c7333-e8ce-4486-a228-a953aa93bc41", APIVersion:"batch/v1", ResourceVersion:"489", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-create-p5tgk
	I0103 20:06:52.335565       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"6356619e-93b8-450a-a7ed-e0c55ad001ec", APIVersion:"batch/v1", ResourceVersion:"499", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-patch-28l6d
	I0103 20:06:55.401606       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"830c7333-e8ce-4486-a228-a953aa93bc41", APIVersion:"batch/v1", ResourceVersion:"501", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I0103 20:06:55.419577       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"6356619e-93b8-450a-a7ed-e0c55ad001ec", APIVersion:"batch/v1", ResourceVersion:"506", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I0103 20:09:35.882665       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"hello-world-app", UID:"ec964704-8b42-42c5-ba3b-66296bb01258", APIVersion:"apps/v1", ResourceVersion:"717", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set hello-world-app-5f5d8b66bb to 1
	I0103 20:09:35.905371       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"hello-world-app-5f5d8b66bb", UID:"4e6b40b0-8a27-4258-9c86-e566793e44bb", APIVersion:"apps/v1", ResourceVersion:"718", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: hello-world-app-5f5d8b66bb-mjkfr
	
	
	==> kube-proxy [ba16032c0e5f0b22845ab218eb196634384aee55d4dc0abe8e196cd948bf49b7] <==
	W0103 20:06:34.003219       1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
	I0103 20:06:34.026888       1 node.go:136] Successfully retrieved node IP: 192.168.49.2
	I0103 20:06:34.026997       1 server_others.go:186] Using iptables Proxier.
	I0103 20:06:34.027335       1 server.go:583] Version: v1.18.20
	I0103 20:06:34.030177       1 config.go:133] Starting endpoints config controller
	I0103 20:06:34.031473       1 shared_informer.go:223] Waiting for caches to sync for endpoints config
	I0103 20:06:34.031607       1 config.go:315] Starting service config controller
	I0103 20:06:34.031649       1 shared_informer.go:223] Waiting for caches to sync for service config
	I0103 20:06:34.131719       1 shared_informer.go:230] Caches are synced for endpoints config 
	I0103 20:06:34.131852       1 shared_informer.go:230] Caches are synced for service config 
	
	
	==> kube-scheduler [3723d342434fe5cebcb3a8b98f5d55a789446fffb3b2bd1e4ef23306bcd56723] <==
	I0103 20:06:14.895662       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0103 20:06:14.895699       1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0103 20:06:14.895754       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0103 20:06:14.900962       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0103 20:06:14.901271       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0103 20:06:14.901421       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0103 20:06:14.901624       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0103 20:06:14.901890       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0103 20:06:14.902899       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0103 20:06:14.902984       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0103 20:06:14.903042       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0103 20:06:14.903107       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0103 20:06:14.903196       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0103 20:06:14.903276       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0103 20:06:14.903355       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0103 20:06:15.715158       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0103 20:06:15.788946       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0103 20:06:15.816290       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0103 20:06:15.925646       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0103 20:06:15.928543       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0103 20:06:15.938154       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0103 20:06:17.495886       1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	E0103 20:06:33.086092       1 factory.go:503] pod: kube-system/coredns-66bff467f8-fmzzz is already present in the active queue
	E0103 20:06:33.100652       1 factory.go:503] pod: kube-system/coredns-66bff467f8-qt9jp is already present in the active queue
	E0103 20:06:34.066770       1 factory.go:503] pod: kube-system/storage-provisioner is already present in unschedulable queue
	
	
	==> kubelet <==
	Jan 03 20:09:40 ingress-addon-legacy-480050 kubelet[1619]: I0103 20:09:40.708141    1619 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: b00451ede44177245f0fe6ae141567840477a1b2245df2296d7987a5a79ab29b
	Jan 03 20:09:40 ingress-addon-legacy-480050 kubelet[1619]: I0103 20:09:40.708417    1619 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 9ebf6df31d88265ad9342c366d1d7f36b353e7658f9064b0ebae1f7d301564a6
	Jan 03 20:09:40 ingress-addon-legacy-480050 kubelet[1619]: E0103 20:09:40.708653    1619 pod_workers.go:191] Error syncing pod ffc562cf-96d9-4822-b58e-26dbed130cdc ("hello-world-app-5f5d8b66bb-mjkfr_default(ffc562cf-96d9-4822-b58e-26dbed130cdc)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 10s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-mjkfr_default(ffc562cf-96d9-4822-b58e-26dbed130cdc)"
	Jan 03 20:09:41 ingress-addon-legacy-480050 kubelet[1619]: I0103 20:09:41.711012    1619 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 9ebf6df31d88265ad9342c366d1d7f36b353e7658f9064b0ebae1f7d301564a6
	Jan 03 20:09:41 ingress-addon-legacy-480050 kubelet[1619]: E0103 20:09:41.711262    1619 pod_workers.go:191] Error syncing pod ffc562cf-96d9-4822-b58e-26dbed130cdc ("hello-world-app-5f5d8b66bb-mjkfr_default(ffc562cf-96d9-4822-b58e-26dbed130cdc)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 10s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-mjkfr_default(ffc562cf-96d9-4822-b58e-26dbed130cdc)"
	Jan 03 20:09:48 ingress-addon-legacy-480050 kubelet[1619]: E0103 20:09:48.302640    1619 remote_image.go:87] ImageStatus "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" from image service failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Jan 03 20:09:48 ingress-addon-legacy-480050 kubelet[1619]: E0103 20:09:48.302688    1619 kuberuntime_image.go:85] ImageStatus for image {"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab"} failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Jan 03 20:09:48 ingress-addon-legacy-480050 kubelet[1619]: E0103 20:09:48.302740    1619 kuberuntime_manager.go:818] container start failed: ImageInspectError: Failed to inspect image "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab": rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Jan 03 20:09:48 ingress-addon-legacy-480050 kubelet[1619]: E0103 20:09:48.302779    1619 pod_workers.go:191] Error syncing pod 4eb693cc-3c33-4a67-83ed-76208b6f3043 ("kube-ingress-dns-minikube_kube-system(4eb693cc-3c33-4a67-83ed-76208b6f3043)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with ImageInspectError: "Failed to inspect image \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\": rpc error: code = Unknown desc = short-name \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\" did not resolve to an alias and no unqualified-search registries are defined in \"/etc/containers/registries.conf\""
	Jan 03 20:09:51 ingress-addon-legacy-480050 kubelet[1619]: I0103 20:09:51.949795    1619 reconciler.go:196] operationExecutor.UnmountVolume started for volume "minikube-ingress-dns-token-2wb8m" (UniqueName: "kubernetes.io/secret/4eb693cc-3c33-4a67-83ed-76208b6f3043-minikube-ingress-dns-token-2wb8m") pod "4eb693cc-3c33-4a67-83ed-76208b6f3043" (UID: "4eb693cc-3c33-4a67-83ed-76208b6f3043")
	Jan 03 20:09:51 ingress-addon-legacy-480050 kubelet[1619]: I0103 20:09:51.954283    1619 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4eb693cc-3c33-4a67-83ed-76208b6f3043-minikube-ingress-dns-token-2wb8m" (OuterVolumeSpecName: "minikube-ingress-dns-token-2wb8m") pod "4eb693cc-3c33-4a67-83ed-76208b6f3043" (UID: "4eb693cc-3c33-4a67-83ed-76208b6f3043"). InnerVolumeSpecName "minikube-ingress-dns-token-2wb8m". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jan 03 20:09:52 ingress-addon-legacy-480050 kubelet[1619]: I0103 20:09:52.050209    1619 reconciler.go:319] Volume detached for volume "minikube-ingress-dns-token-2wb8m" (UniqueName: "kubernetes.io/secret/4eb693cc-3c33-4a67-83ed-76208b6f3043-minikube-ingress-dns-token-2wb8m") on node "ingress-addon-legacy-480050" DevicePath ""
	Jan 03 20:09:53 ingress-addon-legacy-480050 kubelet[1619]: I0103 20:09:53.301739    1619 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 9ebf6df31d88265ad9342c366d1d7f36b353e7658f9064b0ebae1f7d301564a6
	Jan 03 20:09:53 ingress-addon-legacy-480050 kubelet[1619]: I0103 20:09:53.729032    1619 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 9ebf6df31d88265ad9342c366d1d7f36b353e7658f9064b0ebae1f7d301564a6
	Jan 03 20:09:53 ingress-addon-legacy-480050 kubelet[1619]: I0103 20:09:53.729293    1619 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 3ecbca09b4cf078d9edd65ae7d8e9ba1c6de5491c1443ca59a6345f5e2520630
	Jan 03 20:09:53 ingress-addon-legacy-480050 kubelet[1619]: E0103 20:09:53.729560    1619 pod_workers.go:191] Error syncing pod ffc562cf-96d9-4822-b58e-26dbed130cdc ("hello-world-app-5f5d8b66bb-mjkfr_default(ffc562cf-96d9-4822-b58e-26dbed130cdc)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 20s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-mjkfr_default(ffc562cf-96d9-4822-b58e-26dbed130cdc)"
	Jan 03 20:09:54 ingress-addon-legacy-480050 kubelet[1619]: E0103 20:09:54.696979    1619 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-9m2z8.17a6ef4668efd9e1", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-9m2z8", UID:"5eb9834c-c1d1-4fe0-a1a8-c2828c61d076", APIVersion:"v1", ResourceVersion:"487", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stopping container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-480050"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc15d8da4a957a5e1, ext:216946011017, loc:(*time.Location)(0x6a0ef20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc15d8da4a957a5e1, ext:216946011017, loc:(*time.Location)(0x6a0ef20)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-9m2z8.17a6ef4668efd9e1" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Jan 03 20:09:54 ingress-addon-legacy-480050 kubelet[1619]: E0103 20:09:54.708495    1619 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-9m2z8.17a6ef4668efd9e1", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-9m2z8", UID:"5eb9834c-c1d1-4fe0-a1a8-c2828c61d076", APIVersion:"v1", ResourceVersion:"487", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stopping container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-480050"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc15d8da4a957a5e1, ext:216946011017, loc:(*time.Location)(0x6a0ef20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc15d8da4a9db0d36, ext:216954622686, loc:(*time.Location)(0x6a0ef20)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-9m2z8.17a6ef4668efd9e1" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Jan 03 20:09:57 ingress-addon-legacy-480050 kubelet[1619]: W0103 20:09:57.739788    1619 pod_container_deletor.go:77] Container "65c6f57b6ba46f6c42e77e80b23ff53aaec731acfdbcfc4d00d9228ab72a4eab" not found in pod's containers
	Jan 03 20:09:58 ingress-addon-legacy-480050 kubelet[1619]: I0103 20:09:58.870396    1619 reconciler.go:196] operationExecutor.UnmountVolume started for volume "ingress-nginx-token-lpvvc" (UniqueName: "kubernetes.io/secret/5eb9834c-c1d1-4fe0-a1a8-c2828c61d076-ingress-nginx-token-lpvvc") pod "5eb9834c-c1d1-4fe0-a1a8-c2828c61d076" (UID: "5eb9834c-c1d1-4fe0-a1a8-c2828c61d076")
	Jan 03 20:09:58 ingress-addon-legacy-480050 kubelet[1619]: I0103 20:09:58.870464    1619 reconciler.go:196] operationExecutor.UnmountVolume started for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/5eb9834c-c1d1-4fe0-a1a8-c2828c61d076-webhook-cert") pod "5eb9834c-c1d1-4fe0-a1a8-c2828c61d076" (UID: "5eb9834c-c1d1-4fe0-a1a8-c2828c61d076")
	Jan 03 20:09:58 ingress-addon-legacy-480050 kubelet[1619]: I0103 20:09:58.878899    1619 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5eb9834c-c1d1-4fe0-a1a8-c2828c61d076-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "5eb9834c-c1d1-4fe0-a1a8-c2828c61d076" (UID: "5eb9834c-c1d1-4fe0-a1a8-c2828c61d076"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jan 03 20:09:58 ingress-addon-legacy-480050 kubelet[1619]: I0103 20:09:58.881435    1619 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5eb9834c-c1d1-4fe0-a1a8-c2828c61d076-ingress-nginx-token-lpvvc" (OuterVolumeSpecName: "ingress-nginx-token-lpvvc") pod "5eb9834c-c1d1-4fe0-a1a8-c2828c61d076" (UID: "5eb9834c-c1d1-4fe0-a1a8-c2828c61d076"). InnerVolumeSpecName "ingress-nginx-token-lpvvc". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jan 03 20:09:58 ingress-addon-legacy-480050 kubelet[1619]: I0103 20:09:58.970855    1619 reconciler.go:319] Volume detached for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/5eb9834c-c1d1-4fe0-a1a8-c2828c61d076-webhook-cert") on node "ingress-addon-legacy-480050" DevicePath ""
	Jan 03 20:09:58 ingress-addon-legacy-480050 kubelet[1619]: I0103 20:09:58.970912    1619 reconciler.go:319] Volume detached for volume "ingress-nginx-token-lpvvc" (UniqueName: "kubernetes.io/secret/5eb9834c-c1d1-4fe0-a1a8-c2828c61d076-ingress-nginx-token-lpvvc") on node "ingress-addon-legacy-480050" DevicePath ""
	
	
	==> storage-provisioner [1b09b0c7a0941f89ad3ad65a6f9c8412b26111adecec091f9c865337feadf885] <==
	I0103 20:06:46.701312       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0103 20:06:46.718254       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0103 20:06:46.718709       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0103 20:06:46.726467       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0103 20:06:46.726961       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"39a20ef7-3464-4f81-b796-811dce054eea", APIVersion:"v1", ResourceVersion:"436", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ingress-addon-legacy-480050_ebd168ef-d747-45dc-bc9f-22df8f0fa4c1 became leader
	I0103 20:06:46.727083       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-480050_ebd168ef-d747-45dc-bc9f-22df8f0fa4c1!
	I0103 20:06:46.827704       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-480050_ebd168ef-d747-45dc-bc9f-22df8f0fa4c1!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ingress-addon-legacy-480050 -n ingress-addon-legacy-480050
helpers_test.go:261: (dbg) Run:  kubectl --context ingress-addon-legacy-480050 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (179.21s)
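
Note on the kubelet errors above: the kube-ingress-dns image is referenced by the short name cryptexlabs/minikube-ingress-dns, and CRI-O refuses to pull short names when /etc/containers/registries.conf defines no unqualified-search registries, which is exactly what the ImageInspectError reports. A minimal sketch of the missing configuration, assuming docker.io is the intended registry (pinning the fully-qualified name docker.io/cryptexlabs/minikube-ingress-dns:0.3.0 in the addon manifest would avoid the search entirely):

	# /etc/containers/registries.conf -- illustrative sketch, not this node's
	# actual file. With a search list defined, CRI-O can resolve short names
	# such as "cryptexlabs/minikube-ingress-dns:0.3.0" by trying docker.io.
	unqualified-search-registries = ["docker.io"]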

x
+
TestMultiNode/serial/PingHostFrom2Pods (4.24s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:580: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-004925 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:588: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-004925 -- exec busybox-5bc68d56bd-fs9dz -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:599: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-004925 -- exec busybox-5bc68d56bd-fs9dz -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:599: (dbg) Non-zero exit: out/minikube-linux-arm64 kubectl -p multinode-004925 -- exec busybox-5bc68d56bd-fs9dz -- sh -c "ping -c 1 192.168.58.1": exit status 1 (246.859551ms)

-- stdout --
	PING 192.168.58.1 (192.168.58.1): 56 data bytes

-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

** /stderr **
multinode_test.go:600: Failed to ping host (192.168.58.1) from pod (busybox-5bc68d56bd-fs9dz): exit status 1
multinode_test.go:588: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-004925 -- exec busybox-5bc68d56bd-m75vn -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:599: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-004925 -- exec busybox-5bc68d56bd-m75vn -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:599: (dbg) Non-zero exit: out/minikube-linux-arm64 kubectl -p multinode-004925 -- exec busybox-5bc68d56bd-m75vn -- sh -c "ping -c 1 192.168.58.1": exit status 1 (226.409928ms)

-- stdout --
	PING 192.168.58.1 (192.168.58.1): 56 data bytes

-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

** /stderr **
multinode_test.go:600: Failed to ping host (192.168.58.1) from pod (busybox-5bc68d56bd-m75vn): exit status 1
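
Both pods fail with "ping: permission denied (are you root?)", the usual symptom of busybox ping having no way to open an ICMP socket: the runtime's default capability set does not include CAP_NET_RAW (CRI-O dropped it from its defaults), and the node's net.ipv4.ping_group_range does not cover the container's GID, so unprivileged ICMP datagram sockets are unavailable too. A hypothetical manifest granting the capability back (illustrative only; this is not the test's actual busybox deployment):

	# ping-check.yaml -- minimal sketch; pod name and image tag are placeholders.
	apiVersion: v1
	kind: Pod
	metadata:
	  name: ping-check
	spec:
	  containers:
	  - name: busybox
	    image: busybox:1.36
	    command: ["sleep", "3600"]
	    securityContext:
	      capabilities:
	        add: ["NET_RAW"]   # restores the raw-socket permission ping needs

Alternatively, widening the unprivileged ICMP range on the node (sysctl -w net.ipv4.ping_group_range="0 2147483647") lets ping work without CAP_NET_RAW.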
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-004925
helpers_test.go:235: (dbg) docker inspect multinode-004925:

-- stdout --
	[
	    {
	        "Id": "a8b5d16b1951a483b6aef678c3a9e4af0d01efc76e4ab6e72a5f853aa60f6da3",
	        "Created": "2024-01-03T20:16:04.226673753Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 478940,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-01-03T20:16:04.549410024Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:3bfff26a1ae256fcdf8f10a333efdefbe26edc5c1669e1cc5c973c016e44d3c4",
	        "ResolvConfPath": "/var/lib/docker/containers/a8b5d16b1951a483b6aef678c3a9e4af0d01efc76e4ab6e72a5f853aa60f6da3/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a8b5d16b1951a483b6aef678c3a9e4af0d01efc76e4ab6e72a5f853aa60f6da3/hostname",
	        "HostsPath": "/var/lib/docker/containers/a8b5d16b1951a483b6aef678c3a9e4af0d01efc76e4ab6e72a5f853aa60f6da3/hosts",
	        "LogPath": "/var/lib/docker/containers/a8b5d16b1951a483b6aef678c3a9e4af0d01efc76e4ab6e72a5f853aa60f6da3/a8b5d16b1951a483b6aef678c3a9e4af0d01efc76e4ab6e72a5f853aa60f6da3-json.log",
	        "Name": "/multinode-004925",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "multinode-004925:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "multinode-004925",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/42df69065c6af11a5f073553f1737b40415f4c7c8016ba15c467366e8e2b5d7e-init/diff:/var/lib/docker/overlay2/0cefd74c13c0ff527608d5d1778b7a3893c62167f91a1554bd1fa9cb8110135e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/42df69065c6af11a5f073553f1737b40415f4c7c8016ba15c467366e8e2b5d7e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/42df69065c6af11a5f073553f1737b40415f4c7c8016ba15c467366e8e2b5d7e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/42df69065c6af11a5f073553f1737b40415f4c7c8016ba15c467366e8e2b5d7e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "multinode-004925",
	                "Source": "/var/lib/docker/volumes/multinode-004925/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "multinode-004925",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "multinode-004925",
	                "name.minikube.sigs.k8s.io": "multinode-004925",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "d28765d602b3ae9250a96c2a04f088c0548c38bc76357c4cf54660af6c5f0664",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33178"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33177"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33174"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33176"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33175"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/d28765d602b3",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "multinode-004925": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "a8b5d16b1951",
	                        "multinode-004925"
	                    ],
	                    "NetworkID": "5ad9a395bb966e0099c2dcc747b5dc2b9efdc6c6db33466ff8d84144e692ed4a",
	                    "EndpointID": "6013c254b82b3c6079bfa863de0aa7573f4789eb54ca40ea1346cb5fd5c5f1f1",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3a:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
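The inspect dump above is mostly interesting for the port table under NetworkSettings.Ports. Rather than reading the full JSON, individual fields can be pulled with a Go template; a minimal sketch using the same template the harness runs later in this log (profile name multinode-004925 as above):

    # print the host port mapped to the container's SSH port (33178 in the dump above)
    docker container inspect multinode-004925 \
      --format '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'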
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p multinode-004925 -n multinode-004925
helpers_test.go:244: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p multinode-004925 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p multinode-004925 logs -n 25: (1.587166834s)
helpers_test.go:252: TestMultiNode/serial/PingHostFrom2Pods logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -p mount-start-2-833444                           | mount-start-2-833444 | jenkins | v1.32.0 | 03 Jan 24 20:15 UTC | 03 Jan 24 20:15 UTC |
	|         | --memory=2048 --mount                             |                      |         |         |                     |                     |
	|         | --mount-gid 0 --mount-msize                       |                      |         |         |                     |                     |
	|         | 6543 --mount-port 46465                           |                      |         |         |                     |                     |
	|         | --mount-uid 0 --no-kubernetes                     |                      |         |         |                     |                     |
	|         | --driver=docker                                   |                      |         |         |                     |                     |
	|         | --container-runtime=crio                          |                      |         |         |                     |                     |
	| ssh     | mount-start-2-833444 ssh -- ls                    | mount-start-2-833444 | jenkins | v1.32.0 | 03 Jan 24 20:15 UTC | 03 Jan 24 20:15 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| delete  | -p mount-start-1-831368                           | mount-start-1-831368 | jenkins | v1.32.0 | 03 Jan 24 20:15 UTC | 03 Jan 24 20:15 UTC |
	|         | --alsologtostderr -v=5                            |                      |         |         |                     |                     |
	| ssh     | mount-start-2-833444 ssh -- ls                    | mount-start-2-833444 | jenkins | v1.32.0 | 03 Jan 24 20:15 UTC | 03 Jan 24 20:15 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| stop    | -p mount-start-2-833444                           | mount-start-2-833444 | jenkins | v1.32.0 | 03 Jan 24 20:15 UTC | 03 Jan 24 20:15 UTC |
	| start   | -p mount-start-2-833444                           | mount-start-2-833444 | jenkins | v1.32.0 | 03 Jan 24 20:15 UTC | 03 Jan 24 20:15 UTC |
	| ssh     | mount-start-2-833444 ssh -- ls                    | mount-start-2-833444 | jenkins | v1.32.0 | 03 Jan 24 20:15 UTC | 03 Jan 24 20:15 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| delete  | -p mount-start-2-833444                           | mount-start-2-833444 | jenkins | v1.32.0 | 03 Jan 24 20:15 UTC | 03 Jan 24 20:15 UTC |
	| delete  | -p mount-start-1-831368                           | mount-start-1-831368 | jenkins | v1.32.0 | 03 Jan 24 20:15 UTC | 03 Jan 24 20:15 UTC |
	| start   | -p multinode-004925                               | multinode-004925     | jenkins | v1.32.0 | 03 Jan 24 20:15 UTC | 03 Jan 24 20:18 UTC |
	|         | --wait=true --memory=2200                         |                      |         |         |                     |                     |
	|         | --nodes=2 -v=8                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr                                 |                      |         |         |                     |                     |
	|         | --driver=docker                                   |                      |         |         |                     |                     |
	|         | --container-runtime=crio                          |                      |         |         |                     |                     |
	| kubectl | -p multinode-004925 -- apply -f                   | multinode-004925     | jenkins | v1.32.0 | 03 Jan 24 20:18 UTC | 03 Jan 24 20:18 UTC |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                      |         |         |                     |                     |
	| kubectl | -p multinode-004925 -- rollout                    | multinode-004925     | jenkins | v1.32.0 | 03 Jan 24 20:18 UTC | 03 Jan 24 20:18 UTC |
	|         | status deployment/busybox                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-004925 -- get pods -o                | multinode-004925     | jenkins | v1.32.0 | 03 Jan 24 20:18 UTC | 03 Jan 24 20:18 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |         |                     |                     |
	| kubectl | -p multinode-004925 -- get pods -o                | multinode-004925     | jenkins | v1.32.0 | 03 Jan 24 20:18 UTC | 03 Jan 24 20:18 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-004925 -- exec                       | multinode-004925     | jenkins | v1.32.0 | 03 Jan 24 20:18 UTC | 03 Jan 24 20:18 UTC |
	|         | busybox-5bc68d56bd-fs9dz --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-004925 -- exec                       | multinode-004925     | jenkins | v1.32.0 | 03 Jan 24 20:18 UTC | 03 Jan 24 20:18 UTC |
	|         | busybox-5bc68d56bd-m75vn --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-004925 -- exec                       | multinode-004925     | jenkins | v1.32.0 | 03 Jan 24 20:18 UTC | 03 Jan 24 20:18 UTC |
	|         | busybox-5bc68d56bd-fs9dz --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-004925 -- exec                       | multinode-004925     | jenkins | v1.32.0 | 03 Jan 24 20:18 UTC | 03 Jan 24 20:18 UTC |
	|         | busybox-5bc68d56bd-m75vn --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-004925 -- exec                       | multinode-004925     | jenkins | v1.32.0 | 03 Jan 24 20:18 UTC | 03 Jan 24 20:18 UTC |
	|         | busybox-5bc68d56bd-fs9dz -- nslookup              |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-004925 -- exec                       | multinode-004925     | jenkins | v1.32.0 | 03 Jan 24 20:18 UTC | 03 Jan 24 20:18 UTC |
	|         | busybox-5bc68d56bd-m75vn -- nslookup              |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-004925 -- get pods -o                | multinode-004925     | jenkins | v1.32.0 | 03 Jan 24 20:18 UTC | 03 Jan 24 20:18 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-004925 -- exec                       | multinode-004925     | jenkins | v1.32.0 | 03 Jan 24 20:18 UTC | 03 Jan 24 20:18 UTC |
	|         | busybox-5bc68d56bd-fs9dz                          |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-004925 -- exec                       | multinode-004925     | jenkins | v1.32.0 | 03 Jan 24 20:18 UTC |                     |
	|         | busybox-5bc68d56bd-fs9dz -- sh                    |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.168.58.1                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-004925 -- exec                       | multinode-004925     | jenkins | v1.32.0 | 03 Jan 24 20:18 UTC | 03 Jan 24 20:18 UTC |
	|         | busybox-5bc68d56bd-m75vn                          |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-004925 -- exec                       | multinode-004925     | jenkins | v1.32.0 | 03 Jan 24 20:18 UTC |                     |
	|         | busybox-5bc68d56bd-m75vn -- sh                    |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.168.58.1                         |                      |         |         |                     |                     |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
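The two kubectl exec rows above with an empty End Time are the host pings that fail in PingHostFrom2Pods. Stripped of the minikube wrapper, they reduce to roughly the following, assuming a kubeconfig context named multinode-004925 (hypothetical invocation; pod names and gateway IP taken from the table):

    kubectl --context multinode-004925 exec busybox-5bc68d56bd-fs9dz -- sh -c "ping -c 1 192.168.58.1"
    kubectl --context multinode-004925 exec busybox-5bc68d56bd-m75vn -- sh -c "ping -c 1 192.168.58.1"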
	
	
	==> Last Start <==
	Log file created at: 2024/01/03 20:15:58
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.21.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0103 20:15:58.379189  478496 out.go:296] Setting OutFile to fd 1 ...
	I0103 20:15:58.379416  478496 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0103 20:15:58.379431  478496 out.go:309] Setting ErrFile to fd 2...
	I0103 20:15:58.379439  478496 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0103 20:15:58.379725  478496 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17885-409390/.minikube/bin
	I0103 20:15:58.380198  478496 out.go:303] Setting JSON to false
	I0103 20:15:58.381127  478496 start.go:128] hostinfo: {"hostname":"ip-172-31-29-130","uptime":7108,"bootTime":1704305851,"procs":161,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0103 20:15:58.381211  478496 start.go:138] virtualization:  
	I0103 20:15:58.383620  478496 out.go:177] * [multinode-004925] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0103 20:15:58.385948  478496 out.go:177]   - MINIKUBE_LOCATION=17885
	I0103 20:15:58.387648  478496 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0103 20:15:58.386070  478496 notify.go:220] Checking for updates...
	I0103 20:15:58.391171  478496 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17885-409390/kubeconfig
	I0103 20:15:58.392864  478496 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17885-409390/.minikube
	I0103 20:15:58.394739  478496 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0103 20:15:58.396342  478496 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0103 20:15:58.398645  478496 driver.go:392] Setting default libvirt URI to qemu:///system
	I0103 20:15:58.423566  478496 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I0103 20:15:58.423673  478496 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0103 20:15:58.507051  478496 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:35 SystemTime:2024-01-03 20:15:58.496081455 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215044096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0103 20:15:58.507157  478496 docker.go:295] overlay module found
	I0103 20:15:58.510096  478496 out.go:177] * Using the docker driver based on user configuration
	I0103 20:15:58.511803  478496 start.go:298] selected driver: docker
	I0103 20:15:58.511820  478496 start.go:902] validating driver "docker" against <nil>
	I0103 20:15:58.511834  478496 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0103 20:15:58.512477  478496 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0103 20:15:58.578743  478496 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:35 SystemTime:2024-01-03 20:15:58.568862511 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215044096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0103 20:15:58.578912  478496 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0103 20:15:58.579168  478496 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0103 20:15:58.581090  478496 out.go:177] * Using Docker driver with root privileges
	I0103 20:15:58.582834  478496 cni.go:84] Creating CNI manager for ""
	I0103 20:15:58.582854  478496 cni.go:136] 0 nodes found, recommending kindnet
	I0103 20:15:58.582864  478496 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I0103 20:15:58.582877  478496 start_flags.go:323] config:
	{Name:multinode-004925 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-004925 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0103 20:15:58.584753  478496 out.go:177] * Starting control plane node multinode-004925 in cluster multinode-004925
	I0103 20:15:58.586398  478496 cache.go:121] Beginning downloading kic base image for docker with crio
	I0103 20:15:58.588099  478496 out.go:177] * Pulling base image v0.0.42-1703498848-17857 ...
	I0103 20:15:58.590021  478496 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0103 20:15:58.590068  478496 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17885-409390/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4
	I0103 20:15:58.590081  478496 cache.go:56] Caching tarball of preloaded images
	I0103 20:15:58.590110  478496 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c in local docker daemon
	I0103 20:15:58.590161  478496 preload.go:174] Found /home/jenkins/minikube-integration/17885-409390/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0103 20:15:58.590171  478496 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0103 20:15:58.590564  478496 profile.go:148] Saving config to /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/multinode-004925/config.json ...
	I0103 20:15:58.590596  478496 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/multinode-004925/config.json: {Name:mk327a2190f206efbeed251d5109c51b47955b8b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 20:15:58.607498  478496 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c in local docker daemon, skipping pull
	I0103 20:15:58.607523  478496 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c exists in daemon, skipping load
	I0103 20:15:58.607544  478496 cache.go:194] Successfully downloaded all kic artifacts
	I0103 20:15:58.607605  478496 start.go:365] acquiring machines lock for multinode-004925: {Name:mkead7e3835161d83f4f100a39e4193a85e93705 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0103 20:15:58.607724  478496 start.go:369] acquired machines lock for "multinode-004925" in 96.787µs
	I0103 20:15:58.607755  478496 start.go:93] Provisioning new machine with config: &{Name:multinode-004925 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-004925 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0103 20:15:58.607837  478496 start.go:125] createHost starting for "" (driver="docker")
	I0103 20:15:58.610258  478496 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0103 20:15:58.610569  478496 start.go:159] libmachine.API.Create for "multinode-004925" (driver="docker")
	I0103 20:15:58.610607  478496 client.go:168] LocalClient.Create starting
	I0103 20:15:58.610679  478496 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17885-409390/.minikube/certs/ca.pem
	I0103 20:15:58.610710  478496 main.go:141] libmachine: Decoding PEM data...
	I0103 20:15:58.610724  478496 main.go:141] libmachine: Parsing certificate...
	I0103 20:15:58.610777  478496 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17885-409390/.minikube/certs/cert.pem
	I0103 20:15:58.610794  478496 main.go:141] libmachine: Decoding PEM data...
	I0103 20:15:58.610805  478496 main.go:141] libmachine: Parsing certificate...
	I0103 20:15:58.611158  478496 cli_runner.go:164] Run: docker network inspect multinode-004925 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0103 20:15:58.629005  478496 cli_runner.go:211] docker network inspect multinode-004925 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0103 20:15:58.629084  478496 network_create.go:281] running [docker network inspect multinode-004925] to gather additional debugging logs...
	I0103 20:15:58.629106  478496 cli_runner.go:164] Run: docker network inspect multinode-004925
	W0103 20:15:58.647070  478496 cli_runner.go:211] docker network inspect multinode-004925 returned with exit code 1
	I0103 20:15:58.647106  478496 network_create.go:284] error running [docker network inspect multinode-004925]: docker network inspect multinode-004925: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-004925 not found
	I0103 20:15:58.647119  478496 network_create.go:286] output of [docker network inspect multinode-004925]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-004925 not found
	
	** /stderr **
	I0103 20:15:58.647225  478496 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0103 20:15:58.665338  478496 network.go:214] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-e48a1c7f0405 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:af:08:39:14} reservation:<nil>}
	I0103 20:15:58.665689  478496 network.go:209] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4000b5cd80}
	I0103 20:15:58.665717  478496 network_create.go:124] attempt to create docker network multinode-004925 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0103 20:15:58.665777  478496 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-004925 multinode-004925
	I0103 20:15:58.733607  478496 network_create.go:108] docker network multinode-004925 192.168.58.0/24 created
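With the network created, the subnet and gateway chosen above can be confirmed using the same template fragments the harness passes to docker network inspect; a sketch, assuming the network name multinode-004925:

    # expected output: 192.168.58.0/24 192.168.58.1
    docker network inspect multinode-004925 \
      --format '{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'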
	I0103 20:15:58.733640  478496 kic.go:121] calculated static IP "192.168.58.2" for the "multinode-004925" container
	I0103 20:15:58.733719  478496 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0103 20:15:58.751135  478496 cli_runner.go:164] Run: docker volume create multinode-004925 --label name.minikube.sigs.k8s.io=multinode-004925 --label created_by.minikube.sigs.k8s.io=true
	I0103 20:15:58.769072  478496 oci.go:103] Successfully created a docker volume multinode-004925
	I0103 20:15:58.769166  478496 cli_runner.go:164] Run: docker run --rm --name multinode-004925-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-004925 --entrypoint /usr/bin/test -v multinode-004925:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c -d /var/lib
	I0103 20:15:59.371636  478496 oci.go:107] Successfully prepared a docker volume multinode-004925
	I0103 20:15:59.371683  478496 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0103 20:15:59.371703  478496 kic.go:194] Starting extracting preloaded images to volume ...
	I0103 20:15:59.371780  478496 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17885-409390/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v multinode-004925:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c -I lz4 -xf /preloaded.tar -C /extractDir
	I0103 20:16:04.132472  478496 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17885-409390/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v multinode-004925:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c -I lz4 -xf /preloaded.tar -C /extractDir: (4.760633683s)
	I0103 20:16:04.132506  478496 kic.go:203] duration metric: took 4.760800 seconds to extract preloaded images to volume
	W0103 20:16:04.132668  478496 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0103 20:16:04.132794  478496 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0103 20:16:04.207379  478496 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-004925 --name multinode-004925 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-004925 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-004925 --network multinode-004925 --ip 192.168.58.2 --volume multinode-004925:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c
	I0103 20:16:04.558634  478496 cli_runner.go:164] Run: docker container inspect multinode-004925 --format={{.State.Running}}
	I0103 20:16:04.584073  478496 cli_runner.go:164] Run: docker container inspect multinode-004925 --format={{.State.Status}}
	I0103 20:16:04.618035  478496 cli_runner.go:164] Run: docker exec multinode-004925 stat /var/lib/dpkg/alternatives/iptables
	I0103 20:16:04.692272  478496 oci.go:144] the created container "multinode-004925" has a running status.
	I0103 20:16:04.692302  478496 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17885-409390/.minikube/machines/multinode-004925/id_rsa...
	I0103 20:16:05.351099  478496 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-409390/.minikube/machines/multinode-004925/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0103 20:16:05.351196  478496 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17885-409390/.minikube/machines/multinode-004925/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0103 20:16:05.388026  478496 cli_runner.go:164] Run: docker container inspect multinode-004925 --format={{.State.Status}}
	I0103 20:16:05.413559  478496 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0103 20:16:05.413577  478496 kic_runner.go:114] Args: [docker exec --privileged multinode-004925 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0103 20:16:05.492090  478496 cli_runner.go:164] Run: docker container inspect multinode-004925 --format={{.State.Status}}
	I0103 20:16:05.526072  478496 machine.go:88] provisioning docker machine ...
	I0103 20:16:05.526122  478496 ubuntu.go:169] provisioning hostname "multinode-004925"
	I0103 20:16:05.526199  478496 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-004925
	I0103 20:16:05.547599  478496 main.go:141] libmachine: Using SSH client type: native
	I0103 20:16:05.548048  478496 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3beb50] 0x3c12c0 <nil>  [] 0s} 127.0.0.1 33178 <nil> <nil>}
	I0103 20:16:05.548061  478496 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-004925 && echo "multinode-004925" | sudo tee /etc/hostname
	I0103 20:16:05.709308  478496 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-004925
	
	I0103 20:16:05.709389  478496 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-004925
	I0103 20:16:05.732133  478496 main.go:141] libmachine: Using SSH client type: native
	I0103 20:16:05.732595  478496 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3beb50] 0x3c12c0 <nil>  [] 0s} 127.0.0.1 33178 <nil> <nil>}
	I0103 20:16:05.732613  478496 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-004925' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-004925/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-004925' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0103 20:16:05.880004  478496 main.go:141] libmachine: SSH cmd err, output: <nil>: 
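The empty output above suggests the guard in the preceding SSH snippet matched: /etc/hosts already carried a line ending in the hostname (Docker normally writes an IP-to-hostname entry when it creates the container), so neither the sed rewrite nor the 127.0.1.1 append ran. A spot-check from the host, reusing the docker exec pattern from earlier in this log:

    # shows whichever entry satisfied the grep -xq '.*\smultinode-004925' guard
    docker exec multinode-004925 grep multinode-004925 /etc/hosts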
	I0103 20:16:05.880031  478496 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17885-409390/.minikube CaCertPath:/home/jenkins/minikube-integration/17885-409390/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17885-409390/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17885-409390/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17885-409390/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17885-409390/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17885-409390/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17885-409390/.minikube}
	I0103 20:16:05.880061  478496 ubuntu.go:177] setting up certificates
	I0103 20:16:05.880077  478496 provision.go:83] configureAuth start
	I0103 20:16:05.880140  478496 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-004925
	I0103 20:16:05.899757  478496 provision.go:138] copyHostCerts
	I0103 20:16:05.899800  478496 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-409390/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17885-409390/.minikube/ca.pem
	I0103 20:16:05.899841  478496 exec_runner.go:144] found /home/jenkins/minikube-integration/17885-409390/.minikube/ca.pem, removing ...
	I0103 20:16:05.899852  478496 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17885-409390/.minikube/ca.pem
	I0103 20:16:05.899934  478496 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17885-409390/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17885-409390/.minikube/ca.pem (1078 bytes)
	I0103 20:16:05.900015  478496 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-409390/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17885-409390/.minikube/cert.pem
	I0103 20:16:05.900038  478496 exec_runner.go:144] found /home/jenkins/minikube-integration/17885-409390/.minikube/cert.pem, removing ...
	I0103 20:16:05.900046  478496 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17885-409390/.minikube/cert.pem
	I0103 20:16:05.900076  478496 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17885-409390/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17885-409390/.minikube/cert.pem (1123 bytes)
	I0103 20:16:05.900123  478496 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-409390/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17885-409390/.minikube/key.pem
	I0103 20:16:05.900143  478496 exec_runner.go:144] found /home/jenkins/minikube-integration/17885-409390/.minikube/key.pem, removing ...
	I0103 20:16:05.900147  478496 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17885-409390/.minikube/key.pem
	I0103 20:16:05.900179  478496 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17885-409390/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17885-409390/.minikube/key.pem (1679 bytes)
	I0103 20:16:05.900227  478496 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17885-409390/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17885-409390/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17885-409390/.minikube/certs/ca-key.pem org=jenkins.multinode-004925 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube multinode-004925]
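The server certificate generated here should carry every name in the san list above as a subject alternative name. One way to inspect it once written, assuming openssl is available on the host (path taken from the log line above):

    # the X509v3 Subject Alternative Name extension should list 192.168.58.2,
    # localhost, minikube and multinode-004925
    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/17885-409390/.minikube/machines/server.pem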
	I0103 20:16:06.627104  478496 provision.go:172] copyRemoteCerts
	I0103 20:16:06.627187  478496 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0103 20:16:06.627265  478496 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-004925
	I0103 20:16:06.644884  478496 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/17885-409390/.minikube/machines/multinode-004925/id_rsa Username:docker}
	I0103 20:16:06.745027  478496 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-409390/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0103 20:16:06.745089  478496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-409390/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0103 20:16:06.774104  478496 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-409390/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0103 20:16:06.774198  478496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-409390/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0103 20:16:06.802632  478496 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-409390/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0103 20:16:06.802690  478496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-409390/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0103 20:16:06.830776  478496 provision.go:86] duration metric: configureAuth took 950.660835ms
	I0103 20:16:06.830844  478496 ubuntu.go:193] setting minikube options for container-runtime
	I0103 20:16:06.831079  478496 config.go:182] Loaded profile config "multinode-004925": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0103 20:16:06.831226  478496 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-004925
	I0103 20:16:06.849324  478496 main.go:141] libmachine: Using SSH client type: native
	I0103 20:16:06.849758  478496 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3beb50] 0x3c12c0 <nil>  [] 0s} 127.0.0.1 33178 <nil> <nil>}
	I0103 20:16:06.849781  478496 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0103 20:16:07.111347  478496 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
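The echoed line above is the content the tee wrote. The file can also be spot-checked directly from the host once provisioning finishes; a hypothetical verification using the docker exec pattern from earlier in this log:

    # expected: CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
    docker exec multinode-004925 cat /etc/sysconfig/crio.minikube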
	
	I0103 20:16:07.111374  478496 machine.go:91] provisioned docker machine in 1.585265739s
	I0103 20:16:07.111385  478496 client.go:171] LocalClient.Create took 8.500770035s
	I0103 20:16:07.111399  478496 start.go:167] duration metric: libmachine.API.Create for "multinode-004925" took 8.500829127s
	I0103 20:16:07.111406  478496 start.go:300] post-start starting for "multinode-004925" (driver="docker")
	I0103 20:16:07.111418  478496 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0103 20:16:07.111485  478496 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0103 20:16:07.111532  478496 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-004925
	I0103 20:16:07.131912  478496 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/17885-409390/.minikube/machines/multinode-004925/id_rsa Username:docker}
	I0103 20:16:07.233738  478496 ssh_runner.go:195] Run: cat /etc/os-release
	I0103 20:16:07.237692  478496 command_runner.go:130] > PRETTY_NAME="Ubuntu 22.04.3 LTS"
	I0103 20:16:07.237715  478496 command_runner.go:130] > NAME="Ubuntu"
	I0103 20:16:07.237722  478496 command_runner.go:130] > VERSION_ID="22.04"
	I0103 20:16:07.237729  478496 command_runner.go:130] > VERSION="22.04.3 LTS (Jammy Jellyfish)"
	I0103 20:16:07.237735  478496 command_runner.go:130] > VERSION_CODENAME=jammy
	I0103 20:16:07.237740  478496 command_runner.go:130] > ID=ubuntu
	I0103 20:16:07.237747  478496 command_runner.go:130] > ID_LIKE=debian
	I0103 20:16:07.237753  478496 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I0103 20:16:07.237759  478496 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I0103 20:16:07.237773  478496 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0103 20:16:07.237784  478496 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0103 20:16:07.237792  478496 command_runner.go:130] > UBUNTU_CODENAME=jammy
	I0103 20:16:07.237834  478496 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0103 20:16:07.237867  478496 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0103 20:16:07.237883  478496 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0103 20:16:07.237891  478496 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0103 20:16:07.237904  478496 filesync.go:126] Scanning /home/jenkins/minikube-integration/17885-409390/.minikube/addons for local assets ...
	I0103 20:16:07.237964  478496 filesync.go:126] Scanning /home/jenkins/minikube-integration/17885-409390/.minikube/files for local assets ...
	I0103 20:16:07.238052  478496 filesync.go:149] local asset: /home/jenkins/minikube-integration/17885-409390/.minikube/files/etc/ssl/certs/4147632.pem -> 4147632.pem in /etc/ssl/certs
	I0103 20:16:07.238095  478496 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-409390/.minikube/files/etc/ssl/certs/4147632.pem -> /etc/ssl/certs/4147632.pem
	I0103 20:16:07.238197  478496 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0103 20:16:07.248779  478496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-409390/.minikube/files/etc/ssl/certs/4147632.pem --> /etc/ssl/certs/4147632.pem (1708 bytes)
	I0103 20:16:07.277808  478496 start.go:303] post-start completed in 166.386109ms
	I0103 20:16:07.278193  478496 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-004925
	I0103 20:16:07.296119  478496 profile.go:148] Saving config to /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/multinode-004925/config.json ...
	I0103 20:16:07.296396  478496 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0103 20:16:07.296445  478496 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-004925
	I0103 20:16:07.314413  478496 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/17885-409390/.minikube/machines/multinode-004925/id_rsa Username:docker}
	I0103 20:16:07.408565  478496 command_runner.go:130] > 18%!
	(MISSING)I0103 20:16:07.408636  478496 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0103 20:16:07.413952  478496 command_runner.go:130] > 161G
	I0103 20:16:07.414426  478496 start.go:128] duration metric: createHost completed in 8.806574414s
	I0103 20:16:07.414445  478496 start.go:83] releasing machines lock for "multinode-004925", held for 8.806709657s
	I0103 20:16:07.414542  478496 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-004925
	I0103 20:16:07.432609  478496 ssh_runner.go:195] Run: cat /version.json
	I0103 20:16:07.432665  478496 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-004925
	I0103 20:16:07.432675  478496 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0103 20:16:07.432741  478496 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-004925
	I0103 20:16:07.451109  478496 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/17885-409390/.minikube/machines/multinode-004925/id_rsa Username:docker}
	I0103 20:16:07.460404  478496 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/17885-409390/.minikube/machines/multinode-004925/id_rsa Username:docker}
	I0103 20:16:07.687890  478496 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0103 20:16:07.687973  478496 command_runner.go:130] > {"iso_version": "v1.32.1-1702708929-17806", "kicbase_version": "v0.0.42-1703498848-17857", "minikube_version": "v1.32.0", "commit": "d18dc8d014b22564d2860ddb02a821a21df70433"}
	I0103 20:16:07.688109  478496 ssh_runner.go:195] Run: systemctl --version
	I0103 20:16:07.693604  478496 command_runner.go:130] > systemd 249 (249.11-0ubuntu3.11)
	I0103 20:16:07.693642  478496 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I0103 20:16:07.694016  478496 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0103 20:16:07.843696  478496 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0103 20:16:07.849335  478496 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I0103 20:16:07.849360  478496 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I0103 20:16:07.849368  478496 command_runner.go:130] > Device: 36h/54d	Inode: 2346105     Links: 1
	I0103 20:16:07.849376  478496 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0103 20:16:07.849383  478496 command_runner.go:130] > Access: 2023-06-14 14:44:50.000000000 +0000
	I0103 20:16:07.849389  478496 command_runner.go:130] > Modify: 2023-06-14 14:44:50.000000000 +0000
	I0103 20:16:07.849395  478496 command_runner.go:130] > Change: 2024-01-03 19:53:09.608915467 +0000
	I0103 20:16:07.849401  478496 command_runner.go:130] >  Birth: 2024-01-03 19:53:09.608915467 +0000
	I0103 20:16:07.849696  478496 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0103 20:16:07.873856  478496 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0103 20:16:07.873938  478496 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0103 20:16:07.910836  478496 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf, 
	I0103 20:16:07.910869  478496 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0103 20:16:07.910877  478496 start.go:475] detecting cgroup driver to use...
	I0103 20:16:07.910908  478496 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0103 20:16:07.910962  478496 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0103 20:16:07.930358  478496 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0103 20:16:07.943699  478496 docker.go:203] disabling cri-docker service (if available) ...
	I0103 20:16:07.943799  478496 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0103 20:16:07.959828  478496 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0103 20:16:07.976033  478496 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0103 20:16:08.083548  478496 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0103 20:16:08.196088  478496 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I0103 20:16:08.196165  478496 docker.go:219] disabling docker service ...
	I0103 20:16:08.196233  478496 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0103 20:16:08.220165  478496 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0103 20:16:08.235743  478496 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0103 20:16:08.344452  478496 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I0103 20:16:08.344542  478496 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0103 20:16:08.454686  478496 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I0103 20:16:08.454798  478496 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0103 20:16:08.469787  478496 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0103 20:16:08.489650  478496 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0103 20:16:08.491147  478496 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0103 20:16:08.491214  478496 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0103 20:16:08.503413  478496 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0103 20:16:08.503495  478496 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0103 20:16:08.516016  478496 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0103 20:16:08.528411  478496 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
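For reference, the three sed edits above (pause image, cgroup driver, conmon cgroup) leave /etc/crio/crio.conf.d/02-crio.conf with roughly the keys below. This is a sketch inferred from the commands and from the `crio config` dump later in this log; the section headers are an assumption, since the sed expressions rewrite keys wherever they appear in the file.

	# Sketch of /etc/crio/crio.conf.d/02-crio.conf after the edits above (assumed layout).
	[crio.runtime]
	cgroup_manager = "cgroupfs"                  # rewritten in place by the second sed
	conmon_cgroup = "pod"                        # old line deleted, then appended after cgroup_manager
	[crio.image]
	pause_image = "registry.k8s.io/pause:3.9"    # rewritten in place by the first sed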
	I0103 20:16:08.541025  478496 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0103 20:16:08.553827  478496 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0103 20:16:08.563863  478496 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0103 20:16:08.565209  478496 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0103 20:16:08.575777  478496 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0103 20:16:08.665258  478496 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0103 20:16:08.789549  478496 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0103 20:16:08.789667  478496 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0103 20:16:08.794491  478496 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0103 20:16:08.794524  478496 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0103 20:16:08.794533  478496 command_runner.go:130] > Device: 43h/67d	Inode: 190         Links: 1
	I0103 20:16:08.794542  478496 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0103 20:16:08.794548  478496 command_runner.go:130] > Access: 2024-01-03 20:16:08.771437002 +0000
	I0103 20:16:08.794555  478496 command_runner.go:130] > Modify: 2024-01-03 20:16:08.771437002 +0000
	I0103 20:16:08.794561  478496 command_runner.go:130] > Change: 2024-01-03 20:16:08.771437002 +0000
	I0103 20:16:08.794566  478496 command_runner.go:130] >  Birth: -
	I0103 20:16:08.794653  478496 start.go:543] Will wait 60s for crictl version
	I0103 20:16:08.794708  478496 ssh_runner.go:195] Run: which crictl
	I0103 20:16:08.799267  478496 command_runner.go:130] > /usr/bin/crictl
	I0103 20:16:08.799554  478496 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0103 20:16:08.843303  478496 command_runner.go:130] > Version:  0.1.0
	I0103 20:16:08.843370  478496 command_runner.go:130] > RuntimeName:  cri-o
	I0103 20:16:08.843391  478496 command_runner.go:130] > RuntimeVersion:  1.24.6
	I0103 20:16:08.843414  478496 command_runner.go:130] > RuntimeApiVersion:  v1
	I0103 20:16:08.846057  478496 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0103 20:16:08.846207  478496 ssh_runner.go:195] Run: crio --version
	I0103 20:16:08.888397  478496 command_runner.go:130] > crio version 1.24.6
	I0103 20:16:08.888421  478496 command_runner.go:130] > Version:          1.24.6
	I0103 20:16:08.888430  478496 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I0103 20:16:08.888436  478496 command_runner.go:130] > GitTreeState:     clean
	I0103 20:16:08.888443  478496 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I0103 20:16:08.888448  478496 command_runner.go:130] > GoVersion:        go1.18.2
	I0103 20:16:08.888454  478496 command_runner.go:130] > Compiler:         gc
	I0103 20:16:08.888459  478496 command_runner.go:130] > Platform:         linux/arm64
	I0103 20:16:08.888472  478496 command_runner.go:130] > Linkmode:         dynamic
	I0103 20:16:08.888488  478496 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0103 20:16:08.888497  478496 command_runner.go:130] > SeccompEnabled:   true
	I0103 20:16:08.888507  478496 command_runner.go:130] > AppArmorEnabled:  false
	I0103 20:16:08.890557  478496 ssh_runner.go:195] Run: crio --version
	I0103 20:16:08.933227  478496 command_runner.go:130] > crio version 1.24.6
	I0103 20:16:08.933253  478496 command_runner.go:130] > Version:          1.24.6
	I0103 20:16:08.933263  478496 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I0103 20:16:08.933269  478496 command_runner.go:130] > GitTreeState:     clean
	I0103 20:16:08.933276  478496 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I0103 20:16:08.933282  478496 command_runner.go:130] > GoVersion:        go1.18.2
	I0103 20:16:08.933288  478496 command_runner.go:130] > Compiler:         gc
	I0103 20:16:08.933296  478496 command_runner.go:130] > Platform:         linux/arm64
	I0103 20:16:08.933303  478496 command_runner.go:130] > Linkmode:         dynamic
	I0103 20:16:08.933318  478496 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0103 20:16:08.933324  478496 command_runner.go:130] > SeccompEnabled:   true
	I0103 20:16:08.933335  478496 command_runner.go:130] > AppArmorEnabled:  false
	I0103 20:16:08.938671  478496 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.6 ...
	I0103 20:16:08.940433  478496 cli_runner.go:164] Run: docker network inspect multinode-004925 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0103 20:16:08.957778  478496 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I0103 20:16:08.962539  478496 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
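When the grep above finds no existing entry, the bash one-liner rewrites /etc/hosts so that it ends with a single tab-separated line of the form (sketch):

	192.168.58.1	host.minikube.internal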
	I0103 20:16:08.976101  478496 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0103 20:16:08.976173  478496 ssh_runner.go:195] Run: sudo crictl images --output json
	I0103 20:16:09.043944  478496 command_runner.go:130] > {
	I0103 20:16:09.043969  478496 command_runner.go:130] >   "images": [
	I0103 20:16:09.043975  478496 command_runner.go:130] >     {
	I0103 20:16:09.043985  478496 command_runner.go:130] >       "id": "04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26",
	I0103 20:16:09.043990  478496 command_runner.go:130] >       "repoTags": [
	I0103 20:16:09.043998  478496 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230809-80a64d96"
	I0103 20:16:09.044002  478496 command_runner.go:130] >       ],
	I0103 20:16:09.044008  478496 command_runner.go:130] >       "repoDigests": [
	I0103 20:16:09.044019  478496 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052",
	I0103 20:16:09.044030  478496 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:f61a1c916e587322444cab4e745a66c8bed6c30208e4dae28d5a1d18c070adb2"
	I0103 20:16:09.044037  478496 command_runner.go:130] >       ],
	I0103 20:16:09.044045  478496 command_runner.go:130] >       "size": "60867618",
	I0103 20:16:09.044051  478496 command_runner.go:130] >       "uid": null,
	I0103 20:16:09.044056  478496 command_runner.go:130] >       "username": "",
	I0103 20:16:09.044072  478496 command_runner.go:130] >       "spec": null,
	I0103 20:16:09.044081  478496 command_runner.go:130] >       "pinned": false
	I0103 20:16:09.044086  478496 command_runner.go:130] >     },
	I0103 20:16:09.044091  478496 command_runner.go:130] >     {
	I0103 20:16:09.044100  478496 command_runner.go:130] >       "id": "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I0103 20:16:09.044108  478496 command_runner.go:130] >       "repoTags": [
	I0103 20:16:09.044115  478496 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0103 20:16:09.044122  478496 command_runner.go:130] >       ],
	I0103 20:16:09.044130  478496 command_runner.go:130] >       "repoDigests": [
	I0103 20:16:09.044140  478496 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I0103 20:16:09.044153  478496 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I0103 20:16:09.044158  478496 command_runner.go:130] >       ],
	I0103 20:16:09.044168  478496 command_runner.go:130] >       "size": "29037500",
	I0103 20:16:09.044176  478496 command_runner.go:130] >       "uid": null,
	I0103 20:16:09.044181  478496 command_runner.go:130] >       "username": "",
	I0103 20:16:09.044191  478496 command_runner.go:130] >       "spec": null,
	I0103 20:16:09.044196  478496 command_runner.go:130] >       "pinned": false
	I0103 20:16:09.044201  478496 command_runner.go:130] >     },
	I0103 20:16:09.044207  478496 command_runner.go:130] >     {
	I0103 20:16:09.044215  478496 command_runner.go:130] >       "id": "97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108",
	I0103 20:16:09.044223  478496 command_runner.go:130] >       "repoTags": [
	I0103 20:16:09.044229  478496 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I0103 20:16:09.044235  478496 command_runner.go:130] >       ],
	I0103 20:16:09.044242  478496 command_runner.go:130] >       "repoDigests": [
	I0103 20:16:09.044252  478496 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:74130b944396a0b0ca9af923ee6e03b08a35d98fc1bbaef4e35cf9acc5599105",
	I0103 20:16:09.044264  478496 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e"
	I0103 20:16:09.044268  478496 command_runner.go:130] >       ],
	I0103 20:16:09.044274  478496 command_runner.go:130] >       "size": "51393451",
	I0103 20:16:09.044279  478496 command_runner.go:130] >       "uid": null,
	I0103 20:16:09.044288  478496 command_runner.go:130] >       "username": "",
	I0103 20:16:09.044295  478496 command_runner.go:130] >       "spec": null,
	I0103 20:16:09.044300  478496 command_runner.go:130] >       "pinned": false
	I0103 20:16:09.044307  478496 command_runner.go:130] >     },
	I0103 20:16:09.044313  478496 command_runner.go:130] >     {
	I0103 20:16:09.044321  478496 command_runner.go:130] >       "id": "9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace",
	I0103 20:16:09.044329  478496 command_runner.go:130] >       "repoTags": [
	I0103 20:16:09.044336  478496 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.9-0"
	I0103 20:16:09.044344  478496 command_runner.go:130] >       ],
	I0103 20:16:09.044350  478496 command_runner.go:130] >       "repoDigests": [
	I0103 20:16:09.044359  478496 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3",
	I0103 20:16:09.044368  478496 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e60789d18cc66486e6db4094383f9732280092f07a1f5455ecbe11d404c8e48b"
	I0103 20:16:09.044380  478496 command_runner.go:130] >       ],
	I0103 20:16:09.044389  478496 command_runner.go:130] >       "size": "182203183",
	I0103 20:16:09.044394  478496 command_runner.go:130] >       "uid": {
	I0103 20:16:09.044399  478496 command_runner.go:130] >         "value": "0"
	I0103 20:16:09.044407  478496 command_runner.go:130] >       },
	I0103 20:16:09.044412  478496 command_runner.go:130] >       "username": "",
	I0103 20:16:09.044417  478496 command_runner.go:130] >       "spec": null,
	I0103 20:16:09.044425  478496 command_runner.go:130] >       "pinned": false
	I0103 20:16:09.044430  478496 command_runner.go:130] >     },
	I0103 20:16:09.044434  478496 command_runner.go:130] >     {
	I0103 20:16:09.044451  478496 command_runner.go:130] >       "id": "04b4c447bb9d4840af3bf7e836397379d65df87c86e55dcd27f31a8d11df2419",
	I0103 20:16:09.044460  478496 command_runner.go:130] >       "repoTags": [
	I0103 20:16:09.044467  478496 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.28.4"
	I0103 20:16:09.044472  478496 command_runner.go:130] >       ],
	I0103 20:16:09.044479  478496 command_runner.go:130] >       "repoDigests": [
	I0103 20:16:09.044489  478496 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb",
	I0103 20:16:09.044501  478496 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a4c3e6bec39f5dcb221a2f08266513ab19b7d977ccc76a0bcaf04d4935ac0fb2"
	I0103 20:16:09.044506  478496 command_runner.go:130] >       ],
	I0103 20:16:09.044513  478496 command_runner.go:130] >       "size": "121119694",
	I0103 20:16:09.044518  478496 command_runner.go:130] >       "uid": {
	I0103 20:16:09.044523  478496 command_runner.go:130] >         "value": "0"
	I0103 20:16:09.044528  478496 command_runner.go:130] >       },
	I0103 20:16:09.044534  478496 command_runner.go:130] >       "username": "",
	I0103 20:16:09.044541  478496 command_runner.go:130] >       "spec": null,
	I0103 20:16:09.044548  478496 command_runner.go:130] >       "pinned": false
	I0103 20:16:09.044553  478496 command_runner.go:130] >     },
	I0103 20:16:09.044567  478496 command_runner.go:130] >     {
	I0103 20:16:09.044575  478496 command_runner.go:130] >       "id": "9961cbceaf234d59b7dcf8a197a024f3e3ce4b7fe2b67c2378efd3d209ca994b",
	I0103 20:16:09.044582  478496 command_runner.go:130] >       "repoTags": [
	I0103 20:16:09.044592  478496 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.28.4"
	I0103 20:16:09.044597  478496 command_runner.go:130] >       ],
	I0103 20:16:09.044602  478496 command_runner.go:130] >       "repoDigests": [
	I0103 20:16:09.044612  478496 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c",
	I0103 20:16:09.044633  478496 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:fe49ea386d014cbf10cd16f53900b91bb7e7c32c5cf4fdcea550c33bedfdc92e"
	I0103 20:16:09.044639  478496 command_runner.go:130] >       ],
	I0103 20:16:09.044654  478496 command_runner.go:130] >       "size": "117252916",
	I0103 20:16:09.044660  478496 command_runner.go:130] >       "uid": {
	I0103 20:16:09.044673  478496 command_runner.go:130] >         "value": "0"
	I0103 20:16:09.044678  478496 command_runner.go:130] >       },
	I0103 20:16:09.044685  478496 command_runner.go:130] >       "username": "",
	I0103 20:16:09.044690  478496 command_runner.go:130] >       "spec": null,
	I0103 20:16:09.044698  478496 command_runner.go:130] >       "pinned": false
	I0103 20:16:09.044707  478496 command_runner.go:130] >     },
	I0103 20:16:09.044712  478496 command_runner.go:130] >     {
	I0103 20:16:09.044724  478496 command_runner.go:130] >       "id": "3ca3ca488cf13fde14cfc4b3ffde0c53a8c161b030f4a444a797fba6aef38c39",
	I0103 20:16:09.044729  478496 command_runner.go:130] >       "repoTags": [
	I0103 20:16:09.044738  478496 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.28.4"
	I0103 20:16:09.044747  478496 command_runner.go:130] >       ],
	I0103 20:16:09.044752  478496 command_runner.go:130] >       "repoDigests": [
	I0103 20:16:09.044761  478496 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:460fb2711108dbfbe9ad77860d0fd8aad842c0e97f1aee757b33f2218daece68",
	I0103 20:16:09.044774  478496 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"
	I0103 20:16:09.044782  478496 command_runner.go:130] >       ],
	I0103 20:16:09.044790  478496 command_runner.go:130] >       "size": "69992343",
	I0103 20:16:09.044795  478496 command_runner.go:130] >       "uid": null,
	I0103 20:16:09.044803  478496 command_runner.go:130] >       "username": "",
	I0103 20:16:09.044808  478496 command_runner.go:130] >       "spec": null,
	I0103 20:16:09.044813  478496 command_runner.go:130] >       "pinned": false
	I0103 20:16:09.044817  478496 command_runner.go:130] >     },
	I0103 20:16:09.044823  478496 command_runner.go:130] >     {
	I0103 20:16:09.044832  478496 command_runner.go:130] >       "id": "05c284c929889d88306fdb3dd14ee2d0132543740f9e247685243214fc3d2c54",
	I0103 20:16:09.044841  478496 command_runner.go:130] >       "repoTags": [
	I0103 20:16:09.044847  478496 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.28.4"
	I0103 20:16:09.044852  478496 command_runner.go:130] >       ],
	I0103 20:16:09.044857  478496 command_runner.go:130] >       "repoDigests": [
	I0103 20:16:09.044881  478496 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba",
	I0103 20:16:09.044895  478496 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:ddb0fb05335238789e9a847f0c6731e1da918c42a389625cb2a7ec577ca20afe"
	I0103 20:16:09.044901  478496 command_runner.go:130] >       ],
	I0103 20:16:09.044909  478496 command_runner.go:130] >       "size": "59253556",
	I0103 20:16:09.044914  478496 command_runner.go:130] >       "uid": {
	I0103 20:16:09.044919  478496 command_runner.go:130] >         "value": "0"
	I0103 20:16:09.044926  478496 command_runner.go:130] >       },
	I0103 20:16:09.044931  478496 command_runner.go:130] >       "username": "",
	I0103 20:16:09.044936  478496 command_runner.go:130] >       "spec": null,
	I0103 20:16:09.044942  478496 command_runner.go:130] >       "pinned": false
	I0103 20:16:09.044946  478496 command_runner.go:130] >     },
	I0103 20:16:09.044951  478496 command_runner.go:130] >     {
	I0103 20:16:09.044962  478496 command_runner.go:130] >       "id": "829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e",
	I0103 20:16:09.044969  478496 command_runner.go:130] >       "repoTags": [
	I0103 20:16:09.044975  478496 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0103 20:16:09.044980  478496 command_runner.go:130] >       ],
	I0103 20:16:09.044988  478496 command_runner.go:130] >       "repoDigests": [
	I0103 20:16:09.044997  478496 command_runner.go:130] >         "registry.k8s.io/pause@sha256:3ec98b8452dc8ae265a6917dfb81587ac78849e520d5dbba6de524851d20eca6",
	I0103 20:16:09.045011  478496 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"
	I0103 20:16:09.045017  478496 command_runner.go:130] >       ],
	I0103 20:16:09.045030  478496 command_runner.go:130] >       "size": "520014",
	I0103 20:16:09.045036  478496 command_runner.go:130] >       "uid": {
	I0103 20:16:09.045044  478496 command_runner.go:130] >         "value": "65535"
	I0103 20:16:09.045055  478496 command_runner.go:130] >       },
	I0103 20:16:09.045060  478496 command_runner.go:130] >       "username": "",
	I0103 20:16:09.045074  478496 command_runner.go:130] >       "spec": null,
	I0103 20:16:09.045079  478496 command_runner.go:130] >       "pinned": false
	I0103 20:16:09.045086  478496 command_runner.go:130] >     }
	I0103 20:16:09.045091  478496 command_runner.go:130] >   ]
	I0103 20:16:09.045099  478496 command_runner.go:130] > }
	I0103 20:16:09.045303  478496 crio.go:496] all images are preloaded for cri-o runtime.
	I0103 20:16:09.045317  478496 crio.go:415] Images already preloaded, skipping extraction
	I0103 20:16:09.045375  478496 ssh_runner.go:195] Run: sudo crictl images --output json
	I0103 20:16:09.086501  478496 command_runner.go:130] > {
	I0103 20:16:09.086545  478496 command_runner.go:130] >   "images": [
	I0103 20:16:09.086552  478496 command_runner.go:130] >     {
	I0103 20:16:09.086562  478496 command_runner.go:130] >       "id": "04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26",
	I0103 20:16:09.086568  478496 command_runner.go:130] >       "repoTags": [
	I0103 20:16:09.086576  478496 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230809-80a64d96"
	I0103 20:16:09.086581  478496 command_runner.go:130] >       ],
	I0103 20:16:09.086586  478496 command_runner.go:130] >       "repoDigests": [
	I0103 20:16:09.086596  478496 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052",
	I0103 20:16:09.086609  478496 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:f61a1c916e587322444cab4e745a66c8bed6c30208e4dae28d5a1d18c070adb2"
	I0103 20:16:09.086621  478496 command_runner.go:130] >       ],
	I0103 20:16:09.086628  478496 command_runner.go:130] >       "size": "60867618",
	I0103 20:16:09.086633  478496 command_runner.go:130] >       "uid": null,
	I0103 20:16:09.086645  478496 command_runner.go:130] >       "username": "",
	I0103 20:16:09.086651  478496 command_runner.go:130] >       "spec": null,
	I0103 20:16:09.086656  478496 command_runner.go:130] >       "pinned": false
	I0103 20:16:09.086660  478496 command_runner.go:130] >     },
	I0103 20:16:09.086672  478496 command_runner.go:130] >     {
	I0103 20:16:09.086682  478496 command_runner.go:130] >       "id": "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I0103 20:16:09.086693  478496 command_runner.go:130] >       "repoTags": [
	I0103 20:16:09.086701  478496 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0103 20:16:09.086707  478496 command_runner.go:130] >       ],
	I0103 20:16:09.086713  478496 command_runner.go:130] >       "repoDigests": [
	I0103 20:16:09.086723  478496 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I0103 20:16:09.086733  478496 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I0103 20:16:09.086737  478496 command_runner.go:130] >       ],
	I0103 20:16:09.086748  478496 command_runner.go:130] >       "size": "29037500",
	I0103 20:16:09.086753  478496 command_runner.go:130] >       "uid": null,
	I0103 20:16:09.086757  478496 command_runner.go:130] >       "username": "",
	I0103 20:16:09.086762  478496 command_runner.go:130] >       "spec": null,
	I0103 20:16:09.086767  478496 command_runner.go:130] >       "pinned": false
	I0103 20:16:09.086772  478496 command_runner.go:130] >     },
	I0103 20:16:09.086776  478496 command_runner.go:130] >     {
	I0103 20:16:09.086784  478496 command_runner.go:130] >       "id": "97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108",
	I0103 20:16:09.086788  478496 command_runner.go:130] >       "repoTags": [
	I0103 20:16:09.086795  478496 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I0103 20:16:09.086800  478496 command_runner.go:130] >       ],
	I0103 20:16:09.086806  478496 command_runner.go:130] >       "repoDigests": [
	I0103 20:16:09.086816  478496 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:74130b944396a0b0ca9af923ee6e03b08a35d98fc1bbaef4e35cf9acc5599105",
	I0103 20:16:09.086828  478496 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e"
	I0103 20:16:09.086834  478496 command_runner.go:130] >       ],
	I0103 20:16:09.086840  478496 command_runner.go:130] >       "size": "51393451",
	I0103 20:16:09.086844  478496 command_runner.go:130] >       "uid": null,
	I0103 20:16:09.086850  478496 command_runner.go:130] >       "username": "",
	I0103 20:16:09.086854  478496 command_runner.go:130] >       "spec": null,
	I0103 20:16:09.086861  478496 command_runner.go:130] >       "pinned": false
	I0103 20:16:09.086865  478496 command_runner.go:130] >     },
	I0103 20:16:09.086869  478496 command_runner.go:130] >     {
	I0103 20:16:09.086877  478496 command_runner.go:130] >       "id": "9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace",
	I0103 20:16:09.086882  478496 command_runner.go:130] >       "repoTags": [
	I0103 20:16:09.086888  478496 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.9-0"
	I0103 20:16:09.086892  478496 command_runner.go:130] >       ],
	I0103 20:16:09.086897  478496 command_runner.go:130] >       "repoDigests": [
	I0103 20:16:09.086906  478496 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3",
	I0103 20:16:09.086915  478496 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e60789d18cc66486e6db4094383f9732280092f07a1f5455ecbe11d404c8e48b"
	I0103 20:16:09.086926  478496 command_runner.go:130] >       ],
	I0103 20:16:09.086932  478496 command_runner.go:130] >       "size": "182203183",
	I0103 20:16:09.086936  478496 command_runner.go:130] >       "uid": {
	I0103 20:16:09.086941  478496 command_runner.go:130] >         "value": "0"
	I0103 20:16:09.086946  478496 command_runner.go:130] >       },
	I0103 20:16:09.086951  478496 command_runner.go:130] >       "username": "",
	I0103 20:16:09.086956  478496 command_runner.go:130] >       "spec": null,
	I0103 20:16:09.086960  478496 command_runner.go:130] >       "pinned": false
	I0103 20:16:09.086965  478496 command_runner.go:130] >     },
	I0103 20:16:09.086969  478496 command_runner.go:130] >     {
	I0103 20:16:09.086976  478496 command_runner.go:130] >       "id": "04b4c447bb9d4840af3bf7e836397379d65df87c86e55dcd27f31a8d11df2419",
	I0103 20:16:09.086983  478496 command_runner.go:130] >       "repoTags": [
	I0103 20:16:09.086989  478496 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.28.4"
	I0103 20:16:09.086994  478496 command_runner.go:130] >       ],
	I0103 20:16:09.087002  478496 command_runner.go:130] >       "repoDigests": [
	I0103 20:16:09.087011  478496 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb",
	I0103 20:16:09.087020  478496 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a4c3e6bec39f5dcb221a2f08266513ab19b7d977ccc76a0bcaf04d4935ac0fb2"
	I0103 20:16:09.087025  478496 command_runner.go:130] >       ],
	I0103 20:16:09.087031  478496 command_runner.go:130] >       "size": "121119694",
	I0103 20:16:09.087036  478496 command_runner.go:130] >       "uid": {
	I0103 20:16:09.087041  478496 command_runner.go:130] >         "value": "0"
	I0103 20:16:09.087045  478496 command_runner.go:130] >       },
	I0103 20:16:09.087050  478496 command_runner.go:130] >       "username": "",
	I0103 20:16:09.087055  478496 command_runner.go:130] >       "spec": null,
	I0103 20:16:09.087060  478496 command_runner.go:130] >       "pinned": false
	I0103 20:16:09.087064  478496 command_runner.go:130] >     },
	I0103 20:16:09.087068  478496 command_runner.go:130] >     {
	I0103 20:16:09.087075  478496 command_runner.go:130] >       "id": "9961cbceaf234d59b7dcf8a197a024f3e3ce4b7fe2b67c2378efd3d209ca994b",
	I0103 20:16:09.087081  478496 command_runner.go:130] >       "repoTags": [
	I0103 20:16:09.087087  478496 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.28.4"
	I0103 20:16:09.087092  478496 command_runner.go:130] >       ],
	I0103 20:16:09.087096  478496 command_runner.go:130] >       "repoDigests": [
	I0103 20:16:09.087106  478496 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c",
	I0103 20:16:09.087115  478496 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:fe49ea386d014cbf10cd16f53900b91bb7e7c32c5cf4fdcea550c33bedfdc92e"
	I0103 20:16:09.087119  478496 command_runner.go:130] >       ],
	I0103 20:16:09.087125  478496 command_runner.go:130] >       "size": "117252916",
	I0103 20:16:09.087136  478496 command_runner.go:130] >       "uid": {
	I0103 20:16:09.087141  478496 command_runner.go:130] >         "value": "0"
	I0103 20:16:09.087145  478496 command_runner.go:130] >       },
	I0103 20:16:09.087150  478496 command_runner.go:130] >       "username": "",
	I0103 20:16:09.087155  478496 command_runner.go:130] >       "spec": null,
	I0103 20:16:09.087160  478496 command_runner.go:130] >       "pinned": false
	I0103 20:16:09.087164  478496 command_runner.go:130] >     },
	I0103 20:16:09.087168  478496 command_runner.go:130] >     {
	I0103 20:16:09.087176  478496 command_runner.go:130] >       "id": "3ca3ca488cf13fde14cfc4b3ffde0c53a8c161b030f4a444a797fba6aef38c39",
	I0103 20:16:09.087180  478496 command_runner.go:130] >       "repoTags": [
	I0103 20:16:09.087188  478496 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.28.4"
	I0103 20:16:09.087193  478496 command_runner.go:130] >       ],
	I0103 20:16:09.087198  478496 command_runner.go:130] >       "repoDigests": [
	I0103 20:16:09.087208  478496 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:460fb2711108dbfbe9ad77860d0fd8aad842c0e97f1aee757b33f2218daece68",
	I0103 20:16:09.087217  478496 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"
	I0103 20:16:09.087239  478496 command_runner.go:130] >       ],
	I0103 20:16:09.087244  478496 command_runner.go:130] >       "size": "69992343",
	I0103 20:16:09.087249  478496 command_runner.go:130] >       "uid": null,
	I0103 20:16:09.087256  478496 command_runner.go:130] >       "username": "",
	I0103 20:16:09.087260  478496 command_runner.go:130] >       "spec": null,
	I0103 20:16:09.087265  478496 command_runner.go:130] >       "pinned": false
	I0103 20:16:09.087269  478496 command_runner.go:130] >     },
	I0103 20:16:09.087273  478496 command_runner.go:130] >     {
	I0103 20:16:09.087283  478496 command_runner.go:130] >       "id": "05c284c929889d88306fdb3dd14ee2d0132543740f9e247685243214fc3d2c54",
	I0103 20:16:09.087287  478496 command_runner.go:130] >       "repoTags": [
	I0103 20:16:09.087294  478496 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.28.4"
	I0103 20:16:09.087298  478496 command_runner.go:130] >       ],
	I0103 20:16:09.087302  478496 command_runner.go:130] >       "repoDigests": [
	I0103 20:16:09.087322  478496 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba",
	I0103 20:16:09.087331  478496 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:ddb0fb05335238789e9a847f0c6731e1da918c42a389625cb2a7ec577ca20afe"
	I0103 20:16:09.087336  478496 command_runner.go:130] >       ],
	I0103 20:16:09.087341  478496 command_runner.go:130] >       "size": "59253556",
	I0103 20:16:09.087345  478496 command_runner.go:130] >       "uid": {
	I0103 20:16:09.087350  478496 command_runner.go:130] >         "value": "0"
	I0103 20:16:09.087354  478496 command_runner.go:130] >       },
	I0103 20:16:09.087359  478496 command_runner.go:130] >       "username": "",
	I0103 20:16:09.087366  478496 command_runner.go:130] >       "spec": null,
	I0103 20:16:09.087371  478496 command_runner.go:130] >       "pinned": false
	I0103 20:16:09.087375  478496 command_runner.go:130] >     },
	I0103 20:16:09.087380  478496 command_runner.go:130] >     {
	I0103 20:16:09.087388  478496 command_runner.go:130] >       "id": "829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e",
	I0103 20:16:09.087392  478496 command_runner.go:130] >       "repoTags": [
	I0103 20:16:09.087398  478496 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0103 20:16:09.087402  478496 command_runner.go:130] >       ],
	I0103 20:16:09.087407  478496 command_runner.go:130] >       "repoDigests": [
	I0103 20:16:09.087416  478496 command_runner.go:130] >         "registry.k8s.io/pause@sha256:3ec98b8452dc8ae265a6917dfb81587ac78849e520d5dbba6de524851d20eca6",
	I0103 20:16:09.087425  478496 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"
	I0103 20:16:09.087429  478496 command_runner.go:130] >       ],
	I0103 20:16:09.087434  478496 command_runner.go:130] >       "size": "520014",
	I0103 20:16:09.087438  478496 command_runner.go:130] >       "uid": {
	I0103 20:16:09.087443  478496 command_runner.go:130] >         "value": "65535"
	I0103 20:16:09.087447  478496 command_runner.go:130] >       },
	I0103 20:16:09.087452  478496 command_runner.go:130] >       "username": "",
	I0103 20:16:09.087456  478496 command_runner.go:130] >       "spec": null,
	I0103 20:16:09.087463  478496 command_runner.go:130] >       "pinned": false
	I0103 20:16:09.087467  478496 command_runner.go:130] >     }
	I0103 20:16:09.087471  478496 command_runner.go:130] >   ]
	I0103 20:16:09.087475  478496 command_runner.go:130] > }
	I0103 20:16:09.089397  478496 crio.go:496] all images are preloaded for cri-o runtime.
	I0103 20:16:09.089419  478496 cache_images.go:84] Images are preloaded, skipping loading
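As an aside, the `crictl images --output json` payload dumped twice above has a simple shape that is easy to consume programmatically. The following minimal Go decoder is illustrative only; the struct fields are inferred from the JSON shown in this log, not taken from minikube's sources.

	// Illustrative sketch: decode `sudo crictl images --output json` output
	// of the shape shown above. Field names come from the JSON in this log.
	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	type crictlImage struct {
		ID          string   `json:"id"`
		RepoTags    []string `json:"repoTags"`
		RepoDigests []string `json:"repoDigests"`
		Size        string   `json:"size"`
		Pinned      bool     `json:"pinned"`
	}

	type imageList struct {
		Images []crictlImage `json:"images"`
	}

	func main() {
		// Assumes crictl is installed and the caller may invoke it via sudo.
		out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
		if err != nil {
			panic(err)
		}
		var list imageList
		if err := json.Unmarshal(out, &list); err != nil {
			panic(err)
		}
		for _, img := range list.Images {
			fmt.Println(img.RepoTags, img.Size)
		}
	}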
	I0103 20:16:09.089498  478496 ssh_runner.go:195] Run: crio config
	I0103 20:16:09.146909  478496 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0103 20:16:09.146941  478496 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0103 20:16:09.146951  478496 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0103 20:16:09.146956  478496 command_runner.go:130] > #
	I0103 20:16:09.146965  478496 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0103 20:16:09.146973  478496 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0103 20:16:09.146981  478496 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0103 20:16:09.146994  478496 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0103 20:16:09.146999  478496 command_runner.go:130] > # reload'.
	I0103 20:16:09.147011  478496 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0103 20:16:09.147019  478496 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0103 20:16:09.147026  478496 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0103 20:16:09.147033  478496 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0103 20:16:09.147037  478496 command_runner.go:130] > [crio]
	I0103 20:16:09.147045  478496 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0103 20:16:09.147052  478496 command_runner.go:130] > # containers images, in this directory.
	I0103 20:16:09.147062  478496 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I0103 20:16:09.147070  478496 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0103 20:16:09.147076  478496 command_runner.go:130] > # runroot = "/tmp/containers-user-1000/containers"
	I0103 20:16:09.147084  478496 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0103 20:16:09.147092  478496 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0103 20:16:09.147375  478496 command_runner.go:130] > # storage_driver = "vfs"
	I0103 20:16:09.147415  478496 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0103 20:16:09.147442  478496 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0103 20:16:09.147447  478496 command_runner.go:130] > # storage_option = [
	I0103 20:16:09.147452  478496 command_runner.go:130] > # ]
	I0103 20:16:09.147459  478496 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0103 20:16:09.147469  478496 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0103 20:16:09.147671  478496 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0103 20:16:09.147684  478496 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0103 20:16:09.147697  478496 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0103 20:16:09.147703  478496 command_runner.go:130] > # always happen on a node reboot
	I0103 20:16:09.147709  478496 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0103 20:16:09.147716  478496 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0103 20:16:09.147728  478496 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0103 20:16:09.147739  478496 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0103 20:16:09.147748  478496 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I0103 20:16:09.147757  478496 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0103 20:16:09.147766  478496 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0103 20:16:09.147771  478496 command_runner.go:130] > # internal_wipe = true
	I0103 20:16:09.147778  478496 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0103 20:16:09.147785  478496 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0103 20:16:09.147792  478496 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0103 20:16:09.147798  478496 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0103 20:16:09.147807  478496 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0103 20:16:09.147811  478496 command_runner.go:130] > [crio.api]
	I0103 20:16:09.147818  478496 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0103 20:16:09.147823  478496 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0103 20:16:09.147844  478496 command_runner.go:130] > # IP address on which the stream server will listen.
	I0103 20:16:09.147850  478496 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0103 20:16:09.147857  478496 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0103 20:16:09.147863  478496 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0103 20:16:09.147871  478496 command_runner.go:130] > # stream_port = "0"
	I0103 20:16:09.147878  478496 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0103 20:16:09.147883  478496 command_runner.go:130] > # stream_enable_tls = false
	I0103 20:16:09.147890  478496 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0103 20:16:09.147895  478496 command_runner.go:130] > # stream_idle_timeout = ""
	I0103 20:16:09.147905  478496 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0103 20:16:09.147913  478496 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0103 20:16:09.147917  478496 command_runner.go:130] > # minutes.
	I0103 20:16:09.147922  478496 command_runner.go:130] > # stream_tls_cert = ""
	I0103 20:16:09.147929  478496 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0103 20:16:09.147936  478496 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0103 20:16:09.147941  478496 command_runner.go:130] > # stream_tls_key = ""
	I0103 20:16:09.147948  478496 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0103 20:16:09.147955  478496 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0103 20:16:09.147963  478496 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0103 20:16:09.147968  478496 command_runner.go:130] > # stream_tls_ca = ""
	I0103 20:16:09.147977  478496 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I0103 20:16:09.147982  478496 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I0103 20:16:09.147992  478496 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I0103 20:16:09.147998  478496 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I0103 20:16:09.148014  478496 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0103 20:16:09.148021  478496 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0103 20:16:09.148026  478496 command_runner.go:130] > [crio.runtime]
	I0103 20:16:09.148032  478496 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0103 20:16:09.148039  478496 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0103 20:16:09.148044  478496 command_runner.go:130] > # "nofile=1024:2048"
	I0103 20:16:09.148051  478496 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0103 20:16:09.148056  478496 command_runner.go:130] > # default_ulimits = [
	I0103 20:16:09.148060  478496 command_runner.go:130] > # ]
	I0103 20:16:09.148067  478496 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0103 20:16:09.148073  478496 command_runner.go:130] > # no_pivot = false
	I0103 20:16:09.148080  478496 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0103 20:16:09.148087  478496 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0103 20:16:09.148093  478496 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0103 20:16:09.148099  478496 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0103 20:16:09.148105  478496 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0103 20:16:09.148115  478496 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0103 20:16:09.148120  478496 command_runner.go:130] > # conmon = ""
	I0103 20:16:09.148125  478496 command_runner.go:130] > # Cgroup setting for conmon
	I0103 20:16:09.148133  478496 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0103 20:16:09.148138  478496 command_runner.go:130] > conmon_cgroup = "pod"
	I0103 20:16:09.148146  478496 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0103 20:16:09.148152  478496 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0103 20:16:09.148160  478496 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0103 20:16:09.148164  478496 command_runner.go:130] > # conmon_env = [
	I0103 20:16:09.148168  478496 command_runner.go:130] > # ]
	I0103 20:16:09.148174  478496 command_runner.go:130] > # Additional environment variables to set for all the
	I0103 20:16:09.148180  478496 command_runner.go:130] > # containers. These are overridden if set in the
	I0103 20:16:09.148187  478496 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0103 20:16:09.148192  478496 command_runner.go:130] > # default_env = [
	I0103 20:16:09.148195  478496 command_runner.go:130] > # ]
	I0103 20:16:09.148202  478496 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0103 20:16:09.148208  478496 command_runner.go:130] > # selinux = false
	I0103 20:16:09.148215  478496 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0103 20:16:09.148225  478496 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0103 20:16:09.148232  478496 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0103 20:16:09.148237  478496 command_runner.go:130] > # seccomp_profile = ""
	I0103 20:16:09.148244  478496 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0103 20:16:09.148250  478496 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0103 20:16:09.148258  478496 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0103 20:16:09.148263  478496 command_runner.go:130] > # which might increase security.
	I0103 20:16:09.148268  478496 command_runner.go:130] > # seccomp_use_default_when_empty = true
	I0103 20:16:09.148276  478496 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0103 20:16:09.148283  478496 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0103 20:16:09.148291  478496 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0103 20:16:09.148299  478496 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0103 20:16:09.148305  478496 command_runner.go:130] > # This option supports live configuration reload.
	I0103 20:16:09.148310  478496 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0103 20:16:09.148319  478496 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0103 20:16:09.148324  478496 command_runner.go:130] > # the cgroup blockio controller.
	I0103 20:16:09.148329  478496 command_runner.go:130] > # blockio_config_file = ""
	I0103 20:16:09.148336  478496 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0103 20:16:09.148343  478496 command_runner.go:130] > # irqbalance daemon.
	I0103 20:16:09.148350  478496 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0103 20:16:09.148358  478496 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0103 20:16:09.148364  478496 command_runner.go:130] > # This option supports live configuration reload.
	I0103 20:16:09.148368  478496 command_runner.go:130] > # rdt_config_file = ""
	I0103 20:16:09.148375  478496 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0103 20:16:09.148380  478496 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0103 20:16:09.148387  478496 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0103 20:16:09.148392  478496 command_runner.go:130] > # separate_pull_cgroup = ""
	I0103 20:16:09.148400  478496 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0103 20:16:09.148407  478496 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0103 20:16:09.148412  478496 command_runner.go:130] > # will be added.
	I0103 20:16:09.148416  478496 command_runner.go:130] > # default_capabilities = [
	I0103 20:16:09.148681  478496 command_runner.go:130] > # 	"CHOWN",
	I0103 20:16:09.148726  478496 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0103 20:16:09.148746  478496 command_runner.go:130] > # 	"FSETID",
	I0103 20:16:09.148766  478496 command_runner.go:130] > # 	"FOWNER",
	I0103 20:16:09.148797  478496 command_runner.go:130] > # 	"SETGID",
	I0103 20:16:09.148823  478496 command_runner.go:130] > # 	"SETUID",
	I0103 20:16:09.148843  478496 command_runner.go:130] > # 	"SETPCAP",
	I0103 20:16:09.148864  478496 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0103 20:16:09.148882  478496 command_runner.go:130] > # 	"KILL",
	I0103 20:16:09.148908  478496 command_runner.go:130] > # ]
	I0103 20:16:09.148937  478496 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0103 20:16:09.148961  478496 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0103 20:16:09.148981  478496 command_runner.go:130] > # add_inheritable_capabilities = true
	I0103 20:16:09.149013  478496 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0103 20:16:09.149038  478496 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0103 20:16:09.149056  478496 command_runner.go:130] > # default_sysctls = [
	I0103 20:16:09.149074  478496 command_runner.go:130] > # ]
	I0103 20:16:09.149093  478496 command_runner.go:130] > # List of devices on the host that a
	I0103 20:16:09.149123  478496 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0103 20:16:09.149163  478496 command_runner.go:130] > # allowed_devices = [
	I0103 20:16:09.149183  478496 command_runner.go:130] > # 	"/dev/fuse",
	I0103 20:16:09.149201  478496 command_runner.go:130] > # ]
	I0103 20:16:09.149231  478496 command_runner.go:130] > # List of additional devices. specified as
	I0103 20:16:09.149278  478496 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0103 20:16:09.149300  478496 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0103 20:16:09.149332  478496 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0103 20:16:09.149358  478496 command_runner.go:130] > # additional_devices = [
	I0103 20:16:09.149376  478496 command_runner.go:130] > # ]
	I0103 20:16:09.149396  478496 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0103 20:16:09.149415  478496 command_runner.go:130] > # cdi_spec_dirs = [
	I0103 20:16:09.149439  478496 command_runner.go:130] > # 	"/etc/cdi",
	I0103 20:16:09.149468  478496 command_runner.go:130] > # 	"/var/run/cdi",
	I0103 20:16:09.149486  478496 command_runner.go:130] > # ]
	I0103 20:16:09.149506  478496 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0103 20:16:09.149538  478496 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0103 20:16:09.149559  478496 command_runner.go:130] > # Defaults to false.
	I0103 20:16:09.149579  478496 command_runner.go:130] > # device_ownership_from_security_context = false
	I0103 20:16:09.149603  478496 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0103 20:16:09.149634  478496 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0103 20:16:09.149654  478496 command_runner.go:130] > # hooks_dir = [
	I0103 20:16:09.149673  478496 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0103 20:16:09.149691  478496 command_runner.go:130] > # ]
	I0103 20:16:09.149712  478496 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0103 20:16:09.149742  478496 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0103 20:16:09.149767  478496 command_runner.go:130] > # its default mounts from the following two files:
	I0103 20:16:09.149786  478496 command_runner.go:130] > #
	I0103 20:16:09.149807  478496 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0103 20:16:09.149838  478496 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0103 20:16:09.149865  478496 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0103 20:16:09.149881  478496 command_runner.go:130] > #
	I0103 20:16:09.149903  478496 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0103 20:16:09.149933  478496 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0103 20:16:09.149955  478496 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0103 20:16:09.149973  478496 command_runner.go:130] > #      only add mounts it finds in this file.
	I0103 20:16:09.149992  478496 command_runner.go:130] > #
	I0103 20:16:09.150014  478496 command_runner.go:130] > # default_mounts_file = ""
	I0103 20:16:09.150049  478496 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0103 20:16:09.150077  478496 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0103 20:16:09.150097  478496 command_runner.go:130] > # pids_limit = 0
	I0103 20:16:09.150120  478496 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0103 20:16:09.150150  478496 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0103 20:16:09.150173  478496 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0103 20:16:09.150196  478496 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0103 20:16:09.150215  478496 command_runner.go:130] > # log_size_max = -1
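	Both options above are deprecated in favor of kubelet settings. A minimal sketch of the replacement KubeletConfiguration fields, with illustrative values; appending assumes the keys are not already present in /var/lib/kubelet/config.yaml:
	
	# podPidsLimit and containerLogMaxSize are the kubelet-side replacements
	# for CRI-O's pids_limit and log_size_max; values here are examples only.
	cat <<'EOF' | sudo tee -a /var/lib/kubelet/config.yaml
	podPidsLimit: 4096
	containerLogMaxSize: "10Mi"
	EOF
	sudo systemctl restart kubelet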
	I0103 20:16:09.150246  478496 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0103 20:16:09.150274  478496 command_runner.go:130] > # log_to_journald = false
	I0103 20:16:09.150294  478496 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0103 20:16:09.150313  478496 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0103 20:16:09.150331  478496 command_runner.go:130] > # Path to directory for container attach sockets.
	I0103 20:16:09.150365  478496 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0103 20:16:09.150384  478496 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0103 20:16:09.150401  478496 command_runner.go:130] > # bind_mount_prefix = ""
	I0103 20:16:09.150422  478496 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0103 20:16:09.150454  478496 command_runner.go:130] > # read_only = false
	I0103 20:16:09.150480  478496 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0103 20:16:09.150502  478496 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0103 20:16:09.150540  478496 command_runner.go:130] > # live configuration reload.
	I0103 20:16:09.150565  478496 command_runner.go:130] > # log_level = "info"
	I0103 20:16:09.150585  478496 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0103 20:16:09.150605  478496 command_runner.go:130] > # This option supports live configuration reload.
	I0103 20:16:09.150623  478496 command_runner.go:130] > # log_filter = ""
	I0103 20:16:09.150651  478496 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0103 20:16:09.150678  478496 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0103 20:16:09.150697  478496 command_runner.go:130] > # separated by comma.
	I0103 20:16:09.150715  478496 command_runner.go:130] > # uid_mappings = ""
	I0103 20:16:09.150735  478496 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0103 20:16:09.150764  478496 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0103 20:16:09.150788  478496 command_runner.go:130] > # separated by comma.
	I0103 20:16:09.150806  478496 command_runner.go:130] > # gid_mappings = ""
	I0103 20:16:09.150830  478496 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0103 20:16:09.150851  478496 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0103 20:16:09.150892  478496 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0103 20:16:09.150911  478496 command_runner.go:130] > # minimum_mappable_uid = -1
	I0103 20:16:09.150930  478496 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0103 20:16:09.150959  478496 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0103 20:16:09.150982  478496 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0103 20:16:09.151000  478496 command_runner.go:130] > # minimum_mappable_gid = -1
	I0103 20:16:09.151021  478496 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0103 20:16:09.151043  478496 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0103 20:16:09.151076  478496 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0103 20:16:09.151095  478496 command_runner.go:130] > # ctr_stop_timeout = 30
	I0103 20:16:09.151117  478496 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0103 20:16:09.151149  478496 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0103 20:16:09.151176  478496 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0103 20:16:09.151197  478496 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0103 20:16:09.151228  478496 command_runner.go:130] > # drop_infra_ctr = true
	I0103 20:16:09.151258  478496 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0103 20:16:09.151284  478496 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0103 20:16:09.151307  478496 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0103 20:16:09.151327  478496 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0103 20:16:09.151358  478496 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0103 20:16:09.151381  478496 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0103 20:16:09.151398  478496 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0103 20:16:09.151420  478496 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0103 20:16:09.151437  478496 command_runner.go:130] > # pinns_path = ""
	I0103 20:16:09.151467  478496 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0103 20:16:09.151504  478496 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I0103 20:16:09.151526  478496 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I0103 20:16:09.151544  478496 command_runner.go:130] > # default_runtime = "runc"
	I0103 20:16:09.151573  478496 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0103 20:16:09.151597  478496 command_runner.go:130] > # will cause container creation to fail (as opposed to the current behavior of creating them as directories).
	I0103 20:16:09.151622  478496 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0103 20:16:09.151641  478496 command_runner.go:130] > # creation as a file is not desired either.
	I0103 20:16:09.151676  478496 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0103 20:16:09.151699  478496 command_runner.go:130] > # the hostname is being managed dynamically.
	I0103 20:16:09.151717  478496 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0103 20:16:09.151735  478496 command_runner.go:130] > # ]
	I0103 20:16:09.151775  478496 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0103 20:16:09.151801  478496 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0103 20:16:09.151823  478496 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I0103 20:16:09.151844  478496 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I0103 20:16:09.151861  478496 command_runner.go:130] > #
	I0103 20:16:09.151893  478496 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I0103 20:16:09.151911  478496 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I0103 20:16:09.151929  478496 command_runner.go:130] > #  runtime_type = "oci"
	I0103 20:16:09.151948  478496 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I0103 20:16:09.151975  478496 command_runner.go:130] > #  privileged_without_host_devices = false
	I0103 20:16:09.151998  478496 command_runner.go:130] > #  allowed_annotations = []
	I0103 20:16:09.152017  478496 command_runner.go:130] > # Where:
	I0103 20:16:09.152037  478496 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I0103 20:16:09.152073  478496 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I0103 20:16:09.154078  478496 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0103 20:16:09.154118  478496 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0103 20:16:09.154135  478496 command_runner.go:130] > #   in $PATH.
	I0103 20:16:09.154166  478496 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I0103 20:16:09.154202  478496 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0103 20:16:09.154227  478496 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I0103 20:16:09.154245  478496 command_runner.go:130] > #   state.
	I0103 20:16:09.154282  478496 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0103 20:16:09.154315  478496 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0103 20:16:09.154338  478496 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0103 20:16:09.154358  478496 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0103 20:16:09.154390  478496 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0103 20:16:09.154415  478496 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0103 20:16:09.154435  478496 command_runner.go:130] > #   The currently recognized values are:
	I0103 20:16:09.154467  478496 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0103 20:16:09.154493  478496 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0103 20:16:09.154534  478496 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0103 20:16:09.154568  478496 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0103 20:16:09.154583  478496 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0103 20:16:09.154596  478496 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0103 20:16:09.154604  478496 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0103 20:16:09.154615  478496 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I0103 20:16:09.154622  478496 command_runner.go:130] > #   should be moved to the container's cgroup
	I0103 20:16:09.154637  478496 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0103 20:16:09.154644  478496 command_runner.go:130] > runtime_path = "/usr/lib/cri-o-runc/sbin/runc"
	I0103 20:16:09.154650  478496 command_runner.go:130] > runtime_type = "oci"
	I0103 20:16:09.154658  478496 command_runner.go:130] > runtime_root = "/run/runc"
	I0103 20:16:09.154666  478496 command_runner.go:130] > runtime_config_path = ""
	I0103 20:16:09.154671  478496 command_runner.go:130] > monitor_path = ""
	I0103 20:16:09.154678  478496 command_runner.go:130] > monitor_cgroup = ""
	I0103 20:16:09.154684  478496 command_runner.go:130] > monitor_exec_cgroup = ""
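	The runc entry above follows the [crio.runtime.runtimes.*] table format documented earlier. A minimal sketch of registering a second handler the same way; crun and its binary path are assumptions for illustration (adjust to the distro), not part of this run:
	
	# Hypothetical extra runtime handler using the documented table format.
	sudo tee /etc/crio/crio.conf.d/20-crun.conf <<'EOF'
	[crio.runtime.runtimes.crun]
	runtime_path = "/usr/bin/crun"
	runtime_type = "oci"
	runtime_root = "/run/crun"
	EOF
	sudo systemctl restart crio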
	I0103 20:16:09.154722  478496 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I0103 20:16:09.154732  478496 command_runner.go:130] > # running containers
	I0103 20:16:09.154737  478496 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I0103 20:16:09.154745  478496 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I0103 20:16:09.154755  478496 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I0103 20:16:09.154765  478496 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I0103 20:16:09.154772  478496 command_runner.go:130] > # Kata Containers with the default configured VMM
	I0103 20:16:09.154780  478496 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I0103 20:16:09.154788  478496 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I0103 20:16:09.154795  478496 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I0103 20:16:09.154802  478496 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I0103 20:16:09.154808  478496 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I0103 20:16:09.154815  478496 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0103 20:16:09.154824  478496 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0103 20:16:09.154839  478496 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0103 20:16:09.154848  478496 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I0103 20:16:09.154861  478496 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" (to configure the cpuset).
	I0103 20:16:09.154868  478496 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0103 20:16:09.154882  478496 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0103 20:16:09.154892  478496 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0103 20:16:09.154899  478496 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0103 20:16:09.154908  478496 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0103 20:16:09.154915  478496 command_runner.go:130] > # Example:
	I0103 20:16:09.154920  478496 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0103 20:16:09.154926  478496 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0103 20:16:09.154935  478496 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0103 20:16:09.154941  478496 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0103 20:16:09.154948  478496 command_runner.go:130] > # cpuset = "0-1"
	I0103 20:16:09.154955  478496 command_runner.go:130] > # cpushares = 0
	I0103 20:16:09.154959  478496 command_runner.go:130] > # Where:
	I0103 20:16:09.154965  478496 command_runner.go:130] > # The workload name is workload-type.
	I0103 20:16:09.154973  478496 command_runner.go:130] > # To opt in, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0103 20:16:09.154982  478496 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0103 20:16:09.154990  478496 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0103 20:16:09.154999  478496 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0103 20:16:09.155009  478496 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0103 20:16:09.155014  478496 command_runner.go:130] > # 
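	A minimal sketch of a pod opting into the example workload above, following the annotation forms from the sample config; the pod name, container name, and cpushares value are illustrative:
	
	# The <<- form strips the leading tabs, leaving valid YAML.
	kubectl apply -f - <<-'EOF'
	apiVersion: v1
	kind: Pod
	metadata:
	  name: workload-demo
	  annotations:
	    io.crio/workload: ""                              # activation (key-only match)
	    io.crio.workload-type/app: '{"cpushares": "512"}' # per-container override
	spec:
	  containers:
	  - name: app
	    image: registry.k8s.io/pause:3.9
	EOF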
	I0103 20:16:09.155024  478496 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0103 20:16:09.155030  478496 command_runner.go:130] > #
	I0103 20:16:09.155040  478496 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0103 20:16:09.155047  478496 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0103 20:16:09.155055  478496 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0103 20:16:09.155065  478496 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0103 20:16:09.155074  478496 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0103 20:16:09.155079  478496 command_runner.go:130] > [crio.image]
	I0103 20:16:09.155087  478496 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0103 20:16:09.155097  478496 command_runner.go:130] > # default_transport = "docker://"
	I0103 20:16:09.155105  478496 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0103 20:16:09.155116  478496 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0103 20:16:09.155121  478496 command_runner.go:130] > # global_auth_file = ""
	I0103 20:16:09.155128  478496 command_runner.go:130] > # The image used to instantiate infra containers.
	I0103 20:16:09.155136  478496 command_runner.go:130] > # This option supports live configuration reload.
	I0103 20:16:09.155142  478496 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I0103 20:16:09.155152  478496 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0103 20:16:09.155160  478496 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0103 20:16:09.155169  478496 command_runner.go:130] > # This option supports live configuration reload.
	I0103 20:16:09.155175  478496 command_runner.go:130] > # pause_image_auth_file = ""
	I0103 20:16:09.155184  478496 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0103 20:16:09.155196  478496 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I0103 20:16:09.155204  478496 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I0103 20:16:09.155211  478496 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0103 20:16:09.155217  478496 command_runner.go:130] > # pause_command = "/pause"
	I0103 20:16:09.155231  478496 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0103 20:16:09.155242  478496 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0103 20:16:09.155254  478496 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0103 20:16:09.155264  478496 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0103 20:16:09.155271  478496 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0103 20:16:09.155278  478496 command_runner.go:130] > # signature_policy = ""
	I0103 20:16:09.155286  478496 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0103 20:16:09.155293  478496 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0103 20:16:09.155298  478496 command_runner.go:130] > # changing them here.
	I0103 20:16:09.155303  478496 command_runner.go:130] > # insecure_registries = [
	I0103 20:16:09.155307  478496 command_runner.go:130] > # ]
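	As the comments recommend, registry settings belong in containers-registries.conf(5) rather than here. A minimal sketch marking one registry as insecure; the registry address is illustrative:
	
	# Appends a v2-format [[registry]] block; assumes no conflicting entry exists.
	sudo tee -a /etc/containers/registries.conf <<'EOF'
	[[registry]]
	location = "registry.local:5000"
	insecure = true
	EOF
	sudo systemctl restart crio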
	I0103 20:16:09.155323  478496 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0103 20:16:09.155332  478496 command_runner.go:130] > # ignore; the last will ignore volumes entirely.
	I0103 20:16:09.155347  478496 command_runner.go:130] > # image_volumes = "mkdir"
	I0103 20:16:09.155353  478496 command_runner.go:130] > # Temporary directory to use for storing big files
	I0103 20:16:09.155362  478496 command_runner.go:130] > # big_files_temporary_dir = ""
	I0103 20:16:09.155376  478496 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0103 20:16:09.155382  478496 command_runner.go:130] > # CNI plugins.
	I0103 20:16:09.155386  478496 command_runner.go:130] > [crio.network]
	I0103 20:16:09.155397  478496 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0103 20:16:09.155407  478496 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0103 20:16:09.155418  478496 command_runner.go:130] > # cni_default_network = ""
	I0103 20:16:09.155425  478496 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0103 20:16:09.155430  478496 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0103 20:16:09.155440  478496 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0103 20:16:09.155444  478496 command_runner.go:130] > # plugin_dirs = [
	I0103 20:16:09.155456  478496 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0103 20:16:09.155460  478496 command_runner.go:130] > # ]
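	A minimal sketch of a CNI config CRI-O would pick up from the network_dir above when cni_default_network is unset (it takes the first one found); the network name and subnet are illustrative:
	
	# A basic bridge + host-local IPAM network; <<- strips the leading tabs.
	sudo tee /etc/cni/net.d/10-demo.conflist <<-'EOF'
	{
	  "cniVersion": "0.4.0",
	  "name": "demo",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "cni0",
	      "isGateway": true,
	      "ipMasq": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    }
	  ]
	}
	EOF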
	I0103 20:16:09.155467  478496 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0103 20:16:09.155472  478496 command_runner.go:130] > [crio.metrics]
	I0103 20:16:09.155478  478496 command_runner.go:130] > # Globally enable or disable metrics support.
	I0103 20:16:09.155483  478496 command_runner.go:130] > # enable_metrics = false
	I0103 20:16:09.155489  478496 command_runner.go:130] > # Specify enabled metrics collectors.
	I0103 20:16:09.155503  478496 command_runner.go:130] > # Per default all metrics are enabled.
	I0103 20:16:09.155510  478496 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0103 20:16:09.155518  478496 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0103 20:16:09.155529  478496 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0103 20:16:09.155534  478496 command_runner.go:130] > # metrics_collectors = [
	I0103 20:16:09.155545  478496 command_runner.go:130] > # 	"operations",
	I0103 20:16:09.155551  478496 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0103 20:16:09.155556  478496 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0103 20:16:09.155562  478496 command_runner.go:130] > # 	"operations_errors",
	I0103 20:16:09.155567  478496 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0103 20:16:09.155572  478496 command_runner.go:130] > # 	"image_pulls_by_name",
	I0103 20:16:09.155580  478496 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0103 20:16:09.155585  478496 command_runner.go:130] > # 	"image_pulls_failures",
	I0103 20:16:09.155593  478496 command_runner.go:130] > # 	"image_pulls_successes",
	I0103 20:16:09.155598  478496 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0103 20:16:09.155603  478496 command_runner.go:130] > # 	"image_layer_reuse",
	I0103 20:16:09.155608  478496 command_runner.go:130] > # 	"containers_oom_total",
	I0103 20:16:09.155615  478496 command_runner.go:130] > # 	"containers_oom",
	I0103 20:16:09.155620  478496 command_runner.go:130] > # 	"processes_defunct",
	I0103 20:16:09.155625  478496 command_runner.go:130] > # 	"operations_total",
	I0103 20:16:09.155631  478496 command_runner.go:130] > # 	"operations_latency_seconds",
	I0103 20:16:09.155639  478496 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0103 20:16:09.155644  478496 command_runner.go:130] > # 	"operations_errors_total",
	I0103 20:16:09.155651  478496 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0103 20:16:09.155663  478496 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0103 20:16:09.155670  478496 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0103 20:16:09.155676  478496 command_runner.go:130] > # 	"image_pulls_success_total",
	I0103 20:16:09.155684  478496 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0103 20:16:09.155689  478496 command_runner.go:130] > # 	"containers_oom_count_total",
	I0103 20:16:09.155695  478496 command_runner.go:130] > # ]
	I0103 20:16:09.155703  478496 command_runner.go:130] > # The port on which the metrics server will listen.
	I0103 20:16:09.155708  478496 command_runner.go:130] > # metrics_port = 9090
	I0103 20:16:09.155714  478496 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0103 20:16:09.155719  478496 command_runner.go:130] > # metrics_socket = ""
	I0103 20:16:09.155726  478496 command_runner.go:130] > # The certificate for the secure metrics server.
	I0103 20:16:09.155734  478496 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0103 20:16:09.155744  478496 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0103 20:16:09.155749  478496 command_runner.go:130] > # certificate on any modification event.
	I0103 20:16:09.155756  478496 command_runner.go:130] > # metrics_cert = ""
	I0103 20:16:09.155763  478496 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0103 20:16:09.155772  478496 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0103 20:16:09.155780  478496 command_runner.go:130] > # metrics_key = ""
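	A minimal sketch of turning the metrics server on and scraping it, using the default port documented above; the drop-in file name is illustrative:
	
	sudo tee /etc/crio/crio.conf.d/30-metrics.conf <<'EOF'
	[crio.metrics]
	enable_metrics = true
	metrics_port = 9090
	EOF
	sudo systemctl restart crio
	# Sample a few of the collectors listed above (prefixed as documented).
	curl -s http://127.0.0.1:9090/metrics | grep -m 5 'crio_operations'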
	I0103 20:16:09.155789  478496 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0103 20:16:09.155794  478496 command_runner.go:130] > [crio.tracing]
	I0103 20:16:09.155800  478496 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0103 20:16:09.155805  478496 command_runner.go:130] > # enable_tracing = false
	I0103 20:16:09.155812  478496 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I0103 20:16:09.155820  478496 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0103 20:16:09.155827  478496 command_runner.go:130] > # Number of samples to collect per million spans.
	I0103 20:16:09.155837  478496 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0103 20:16:09.155844  478496 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0103 20:16:09.155851  478496 command_runner.go:130] > [crio.stats]
	I0103 20:16:09.155858  478496 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0103 20:16:09.155865  478496 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0103 20:16:09.155874  478496 command_runner.go:130] > # stats_collection_period = 0
	I0103 20:16:09.155905  478496 command_runner.go:130] ! time="2024-01-03 20:16:09.141752296Z" level=info msg="Starting CRI-O, version: 1.24.6, git: 4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90(clean)"
	I0103 20:16:09.155920  478496 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0103 20:16:09.155996  478496 cni.go:84] Creating CNI manager for ""
	I0103 20:16:09.156009  478496 cni.go:136] 1 nodes found, recommending kindnet
	I0103 20:16:09.156040  478496 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0103 20:16:09.156061  478496 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-004925 NodeName:multinode-004925 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0103 20:16:09.156204  478496 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-004925"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
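	A minimal sketch of sanity-checking a generated config like the one above before letting kubeadm act on it; `kubeadm config validate` ships with recent releases (v1.28.4 is in use here), and the path matches the file minikube copies into place later in this log:
	
	sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml
	# Exercise the full init without mutating the node:
	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run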
	
	I0103 20:16:09.156270  478496 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=multinode-004925 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-004925 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0103 20:16:09.156340  478496 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0103 20:16:09.167379  478496 command_runner.go:130] > kubeadm
	I0103 20:16:09.167399  478496 command_runner.go:130] > kubectl
	I0103 20:16:09.167405  478496 command_runner.go:130] > kubelet
	I0103 20:16:09.167423  478496 binaries.go:44] Found k8s binaries, skipping transfer
	I0103 20:16:09.167482  478496 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0103 20:16:09.178409  478496 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (426 bytes)
	I0103 20:16:09.201149  478496 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0103 20:16:09.222931  478496 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2097 bytes)
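	The unit file and drop-in written above only take effect after a daemon reload; a sketch of the standard systemd steps, using the paths from this log:
	
	sudo systemctl daemon-reload
	sudo systemctl restart kubelet
	systemctl cat kubelet   # confirm the ExecStart override was picked up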
	I0103 20:16:09.244349  478496 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0103 20:16:09.248687  478496 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
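	The one-liner above is an idempotent rewrite: strip any stale control-plane entry, append the fresh one, and copy the result back with sudo (a plain redirection would run without root privileges). The same idiom generalized, with illustrative variable values:
	
	ip=192.168.58.2; host=control-plane.minikube.internal
	# Keep every line that does not end in "<tab><host>", then re-add the entry.
	{ grep -v $'\t'"${host}"'$' /etc/hosts; printf '%s\t%s\n' "$ip" "$host"; } > "/tmp/hosts.$$"
	sudo cp "/tmp/hosts.$$" /etc/hosts && rm -f "/tmp/hosts.$$"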
	I0103 20:16:09.261726  478496 certs.go:56] Setting up /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/multinode-004925 for IP: 192.168.58.2
	I0103 20:16:09.261759  478496 certs.go:190] acquiring lock for shared ca certs: {Name:mk7a87d13d39d2defe5d349d371b78fa1f1e95bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 20:16:09.261896  478496 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17885-409390/.minikube/ca.key
	I0103 20:16:09.261958  478496 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17885-409390/.minikube/proxy-client-ca.key
	I0103 20:16:09.262008  478496 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/multinode-004925/client.key
	I0103 20:16:09.262022  478496 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/multinode-004925/client.crt with IP's: []
	I0103 20:16:09.503519  478496 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/multinode-004925/client.crt ...
	I0103 20:16:09.503551  478496 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/multinode-004925/client.crt: {Name:mk14ae91cd7d17c81fb8d6e23a996f6b5c5dd58f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 20:16:09.503752  478496 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/multinode-004925/client.key ...
	I0103 20:16:09.503765  478496 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/multinode-004925/client.key: {Name:mk62b21b19e74ede09f66f1821701ae6bdc35c8b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 20:16:09.503855  478496 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/multinode-004925/apiserver.key.cee25041
	I0103 20:16:09.503871  478496 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/multinode-004925/apiserver.crt.cee25041 with IP's: [192.168.58.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0103 20:16:09.793304  478496 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/multinode-004925/apiserver.crt.cee25041 ...
	I0103 20:16:09.793334  478496 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/multinode-004925/apiserver.crt.cee25041: {Name:mk021e2b8a9a9913ab5543546f0bf4702c7de306 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 20:16:09.793528  478496 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/multinode-004925/apiserver.key.cee25041 ...
	I0103 20:16:09.793542  478496 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/multinode-004925/apiserver.key.cee25041: {Name:mk2b1f37464f5cfa73db948bbfaa5bdb9dd36fe1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 20:16:09.793647  478496 certs.go:337] copying /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/multinode-004925/apiserver.crt.cee25041 -> /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/multinode-004925/apiserver.crt
	I0103 20:16:09.793728  478496 certs.go:341] copying /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/multinode-004925/apiserver.key.cee25041 -> /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/multinode-004925/apiserver.key
	I0103 20:16:09.793792  478496 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/multinode-004925/proxy-client.key
	I0103 20:16:09.793808  478496 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/multinode-004925/proxy-client.crt with IP's: []
	I0103 20:16:10.264750  478496 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/multinode-004925/proxy-client.crt ...
	I0103 20:16:10.264788  478496 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/multinode-004925/proxy-client.crt: {Name:mk93b506047b578ba526b18859d272938f34e72c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 20:16:10.264982  478496 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/multinode-004925/proxy-client.key ...
	I0103 20:16:10.264999  478496 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/multinode-004925/proxy-client.key: {Name:mk066b026ea311ec93e0f4eb5c74a0eb204ec365 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 20:16:10.265088  478496 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/multinode-004925/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0103 20:16:10.265112  478496 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/multinode-004925/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0103 20:16:10.265125  478496 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/multinode-004925/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0103 20:16:10.265139  478496 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/multinode-004925/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0103 20:16:10.265151  478496 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-409390/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0103 20:16:10.265166  478496 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-409390/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0103 20:16:10.265181  478496 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-409390/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0103 20:16:10.265196  478496 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-409390/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0103 20:16:10.265276  478496 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-409390/.minikube/certs/home/jenkins/minikube-integration/17885-409390/.minikube/certs/414763.pem (1338 bytes)
	W0103 20:16:10.265324  478496 certs.go:433] ignoring /home/jenkins/minikube-integration/17885-409390/.minikube/certs/home/jenkins/minikube-integration/17885-409390/.minikube/certs/414763_empty.pem, impossibly tiny 0 bytes
	I0103 20:16:10.265346  478496 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-409390/.minikube/certs/home/jenkins/minikube-integration/17885-409390/.minikube/certs/ca-key.pem (1679 bytes)
	I0103 20:16:10.265382  478496 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-409390/.minikube/certs/home/jenkins/minikube-integration/17885-409390/.minikube/certs/ca.pem (1078 bytes)
	I0103 20:16:10.265408  478496 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-409390/.minikube/certs/home/jenkins/minikube-integration/17885-409390/.minikube/certs/cert.pem (1123 bytes)
	I0103 20:16:10.265437  478496 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-409390/.minikube/certs/home/jenkins/minikube-integration/17885-409390/.minikube/certs/key.pem (1679 bytes)
	I0103 20:16:10.265496  478496 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-409390/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17885-409390/.minikube/files/etc/ssl/certs/4147632.pem (1708 bytes)
	I0103 20:16:10.265533  478496 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-409390/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0103 20:16:10.265551  478496 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-409390/.minikube/certs/414763.pem -> /usr/share/ca-certificates/414763.pem
	I0103 20:16:10.265563  478496 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-409390/.minikube/files/etc/ssl/certs/4147632.pem -> /usr/share/ca-certificates/4147632.pem
	I0103 20:16:10.266184  478496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/multinode-004925/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0103 20:16:10.297248  478496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/multinode-004925/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0103 20:16:10.327568  478496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/multinode-004925/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0103 20:16:10.357006  478496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/multinode-004925/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0103 20:16:10.386846  478496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-409390/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0103 20:16:10.417551  478496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-409390/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0103 20:16:10.446888  478496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-409390/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0103 20:16:10.477032  478496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-409390/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0103 20:16:10.508253  478496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-409390/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0103 20:16:10.538716  478496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-409390/.minikube/certs/414763.pem --> /usr/share/ca-certificates/414763.pem (1338 bytes)
	I0103 20:16:10.569609  478496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-409390/.minikube/files/etc/ssl/certs/4147632.pem --> /usr/share/ca-certificates/4147632.pem (1708 bytes)
	I0103 20:16:10.599870  478496 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0103 20:16:10.622240  478496 ssh_runner.go:195] Run: openssl version
	I0103 20:16:10.631077  478496 command_runner.go:130] > OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)
	I0103 20:16:10.631482  478496 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0103 20:16:10.644298  478496 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0103 20:16:10.649109  478496 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jan  3 19:53 /usr/share/ca-certificates/minikubeCA.pem
	I0103 20:16:10.649150  478496 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan  3 19:53 /usr/share/ca-certificates/minikubeCA.pem
	I0103 20:16:10.649236  478496 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0103 20:16:10.657623  478496 command_runner.go:130] > b5213941
	I0103 20:16:10.658042  478496 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0103 20:16:10.670322  478496 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/414763.pem && ln -fs /usr/share/ca-certificates/414763.pem /etc/ssl/certs/414763.pem"
	I0103 20:16:10.682785  478496 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/414763.pem
	I0103 20:16:10.687617  478496 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jan  3 20:01 /usr/share/ca-certificates/414763.pem
	I0103 20:16:10.687711  478496 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan  3 20:01 /usr/share/ca-certificates/414763.pem
	I0103 20:16:10.687793  478496 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/414763.pem
	I0103 20:16:10.696095  478496 command_runner.go:130] > 51391683
	I0103 20:16:10.696496  478496 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/414763.pem /etc/ssl/certs/51391683.0"
	I0103 20:16:10.708197  478496 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4147632.pem && ln -fs /usr/share/ca-certificates/4147632.pem /etc/ssl/certs/4147632.pem"
	I0103 20:16:10.719957  478496 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4147632.pem
	I0103 20:16:10.724747  478496 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jan  3 20:01 /usr/share/ca-certificates/4147632.pem
	I0103 20:16:10.724836  478496 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan  3 20:01 /usr/share/ca-certificates/4147632.pem
	I0103 20:16:10.724906  478496 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4147632.pem
	I0103 20:16:10.733341  478496 command_runner.go:130] > 3ec20f2e
	I0103 20:16:10.733785  478496 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4147632.pem /etc/ssl/certs/3ec20f2e.0"
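	The hash-and-symlink sequence above implements OpenSSL's hashed certificate-directory lookup: each trusted PEM in /etc/ssl/certs must be reachable through a <subject-hash>.0 link, which is exactly what the ln -fs commands build by hand. A sketch of verifying the links just created, using the cert paths from this log:
	
	for pem in /usr/share/ca-certificates/minikubeCA.pem \
	           /usr/share/ca-certificates/414763.pem \
	           /usr/share/ca-certificates/4147632.pem; do
	  h=$(openssl x509 -hash -noout -in "$pem")
	  ls -l "/etc/ssl/certs/${h}.0"   # should point back at the PEM
	done
	# The self-signed CA verifies against the directory containing itself.
	openssl verify -CApath /etc/ssl/certs /usr/share/ca-certificates/minikubeCA.pem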
	I0103 20:16:10.745647  478496 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0103 20:16:10.751162  478496 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0103 20:16:10.751205  478496 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0103 20:16:10.751273  478496 kubeadm.go:404] StartCluster: {Name:multinode-004925 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-004925 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0103 20:16:10.751363  478496 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0103 20:16:10.751425  478496 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0103 20:16:10.792998  478496 cri.go:89] found id: ""
	I0103 20:16:10.793066  478496 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0103 20:16:10.802461  478496 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0103 20:16:10.802488  478496 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0103 20:16:10.802497  478496 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0103 20:16:10.803872  478496 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0103 20:16:10.814738  478496 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0103 20:16:10.814817  478496 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0103 20:16:10.825459  478496 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0103 20:16:10.825488  478496 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0103 20:16:10.825499  478496 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0103 20:16:10.825507  478496 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0103 20:16:10.825541  478496 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0103 20:16:10.825583  478496 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0103 20:16:10.882321  478496 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0103 20:16:10.882350  478496 command_runner.go:130] > [init] Using Kubernetes version: v1.28.4
	I0103 20:16:10.882567  478496 kubeadm.go:322] [preflight] Running pre-flight checks
	I0103 20:16:10.882584  478496 command_runner.go:130] > [preflight] Running pre-flight checks
	I0103 20:16:10.934269  478496 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0103 20:16:10.934300  478496 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
	I0103 20:16:10.934353  478496 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1051-aws
	I0103 20:16:10.934371  478496 command_runner.go:130] > KERNEL_VERSION: 5.15.0-1051-aws
	I0103 20:16:10.934404  478496 kubeadm.go:322] OS: Linux
	I0103 20:16:10.934412  478496 command_runner.go:130] > OS: Linux
	I0103 20:16:10.934455  478496 kubeadm.go:322] CGROUPS_CPU: enabled
	I0103 20:16:10.934464  478496 command_runner.go:130] > CGROUPS_CPU: enabled
	I0103 20:16:10.934508  478496 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0103 20:16:10.934537  478496 command_runner.go:130] > CGROUPS_CPUACCT: enabled
	I0103 20:16:10.934582  478496 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0103 20:16:10.934587  478496 command_runner.go:130] > CGROUPS_CPUSET: enabled
	I0103 20:16:10.934631  478496 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0103 20:16:10.934641  478496 command_runner.go:130] > CGROUPS_DEVICES: enabled
	I0103 20:16:10.934685  478496 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0103 20:16:10.934695  478496 command_runner.go:130] > CGROUPS_FREEZER: enabled
	I0103 20:16:10.934739  478496 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0103 20:16:10.934748  478496 command_runner.go:130] > CGROUPS_MEMORY: enabled
	I0103 20:16:10.934791  478496 kubeadm.go:322] CGROUPS_PIDS: enabled
	I0103 20:16:10.934801  478496 command_runner.go:130] > CGROUPS_PIDS: enabled
	I0103 20:16:10.934846  478496 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I0103 20:16:10.934863  478496 command_runner.go:130] > CGROUPS_HUGETLB: enabled
	I0103 20:16:10.934911  478496 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I0103 20:16:10.934922  478496 command_runner.go:130] > CGROUPS_BLKIO: enabled
	I0103 20:16:11.022164  478496 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0103 20:16:11.022195  478496 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0103 20:16:11.022296  478496 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0103 20:16:11.022307  478496 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0103 20:16:11.022393  478496 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0103 20:16:11.022402  478496 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0103 20:16:11.285292  478496 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0103 20:16:11.289761  478496 out.go:204]   - Generating certificates and keys ...
	I0103 20:16:11.285551  478496 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0103 20:16:11.289893  478496 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0103 20:16:11.289911  478496 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0103 20:16:11.289984  478496 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0103 20:16:11.289995  478496 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0103 20:16:11.665625  478496 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0103 20:16:11.665657  478496 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0103 20:16:12.059095  478496 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0103 20:16:12.059172  478496 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I0103 20:16:12.720978  478496 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0103 20:16:12.721008  478496 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I0103 20:16:13.169132  478496 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0103 20:16:13.169213  478496 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I0103 20:16:13.594102  478496 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0103 20:16:13.594138  478496 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I0103 20:16:13.594465  478496 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-004925] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0103 20:16:13.594480  478496 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-004925] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0103 20:16:15.457609  478496 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0103 20:16:15.457647  478496 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I0103 20:16:15.457839  478496 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-004925] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0103 20:16:15.457861  478496 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-004925] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0103 20:16:16.014223  478496 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0103 20:16:16.014257  478496 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0103 20:16:16.430904  478496 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0103 20:16:16.430939  478496 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0103 20:16:16.807696  478496 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0103 20:16:16.807730  478496 command_runner.go:130] > [certs] Generating "sa" key and public key
	I0103 20:16:16.808003  478496 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0103 20:16:16.808029  478496 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0103 20:16:17.275633  478496 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0103 20:16:17.275662  478496 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0103 20:16:17.734307  478496 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0103 20:16:17.734357  478496 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0103 20:16:18.218799  478496 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0103 20:16:18.218826  478496 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0103 20:16:18.779676  478496 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0103 20:16:18.779705  478496 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0103 20:16:18.780756  478496 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0103 20:16:18.780777  478496 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0103 20:16:18.784413  478496 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0103 20:16:18.786958  478496 out.go:204]   - Booting up control plane ...
	I0103 20:16:18.784508  478496 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0103 20:16:18.787065  478496 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0103 20:16:18.787081  478496 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0103 20:16:18.787152  478496 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0103 20:16:18.787162  478496 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0103 20:16:18.788058  478496 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0103 20:16:18.788075  478496 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0103 20:16:18.799270  478496 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0103 20:16:18.799298  478496 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0103 20:16:18.800250  478496 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0103 20:16:18.800269  478496 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0103 20:16:18.800490  478496 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0103 20:16:18.800506  478496 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0103 20:16:18.904771  478496 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0103 20:16:18.904801  478496 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0103 20:16:26.909129  478496 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.004384 seconds
	I0103 20:16:26.909161  478496 command_runner.go:130] > [apiclient] All control plane components are healthy after 8.004384 seconds
	I0103 20:16:26.909268  478496 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0103 20:16:26.909292  478496 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0103 20:16:26.930644  478496 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0103 20:16:26.930683  478496 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0103 20:16:27.458844  478496 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0103 20:16:27.458868  478496 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I0103 20:16:27.459040  478496 kubeadm.go:322] [mark-control-plane] Marking the node multinode-004925 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0103 20:16:27.459047  478496 command_runner.go:130] > [mark-control-plane] Marking the node multinode-004925 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0103 20:16:27.972027  478496 kubeadm.go:322] [bootstrap-token] Using token: 9445k4.ylfxx9ie9apygr6f
	I0103 20:16:27.973956  478496 out.go:204]   - Configuring RBAC rules ...
	I0103 20:16:27.972114  478496 command_runner.go:130] > [bootstrap-token] Using token: 9445k4.ylfxx9ie9apygr6f
	I0103 20:16:27.974082  478496 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0103 20:16:27.974097  478496 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0103 20:16:27.980825  478496 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0103 20:16:27.980851  478496 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0103 20:16:27.989437  478496 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0103 20:16:27.989461  478496 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0103 20:16:27.993855  478496 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0103 20:16:27.993882  478496 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0103 20:16:27.998263  478496 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0103 20:16:27.998296  478496 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0103 20:16:28.005875  478496 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0103 20:16:28.005905  478496 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0103 20:16:28.019795  478496 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0103 20:16:28.019817  478496 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0103 20:16:28.254653  478496 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0103 20:16:28.254675  478496 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0103 20:16:28.395899  478496 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0103 20:16:28.395921  478496 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0103 20:16:28.395927  478496 kubeadm.go:322] 
	I0103 20:16:28.395984  478496 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0103 20:16:28.395989  478496 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I0103 20:16:28.395993  478496 kubeadm.go:322] 
	I0103 20:16:28.396065  478496 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0103 20:16:28.396080  478496 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I0103 20:16:28.396084  478496 kubeadm.go:322] 
	I0103 20:16:28.396108  478496 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0103 20:16:28.396113  478496 command_runner.go:130] >   mkdir -p $HOME/.kube
	I0103 20:16:28.396169  478496 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0103 20:16:28.396173  478496 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0103 20:16:28.396220  478496 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0103 20:16:28.396224  478496 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0103 20:16:28.396228  478496 kubeadm.go:322] 
	I0103 20:16:28.396279  478496 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0103 20:16:28.396284  478496 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I0103 20:16:28.396288  478496 kubeadm.go:322] 
	I0103 20:16:28.396332  478496 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0103 20:16:28.396344  478496 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0103 20:16:28.396349  478496 kubeadm.go:322] 
	I0103 20:16:28.396398  478496 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0103 20:16:28.396403  478496 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I0103 20:16:28.396472  478496 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0103 20:16:28.396480  478496 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0103 20:16:28.396543  478496 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0103 20:16:28.396548  478496 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0103 20:16:28.396552  478496 kubeadm.go:322] 
	I0103 20:16:28.396631  478496 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0103 20:16:28.396635  478496 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I0103 20:16:28.396707  478496 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0103 20:16:28.396711  478496 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I0103 20:16:28.396715  478496 kubeadm.go:322] 
	I0103 20:16:28.396794  478496 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 9445k4.ylfxx9ie9apygr6f \
	I0103 20:16:28.396810  478496 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token 9445k4.ylfxx9ie9apygr6f \
	I0103 20:16:28.396908  478496 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:497121a93982227783f08bfd7c063fc9f8a8d85d0f5f85a6107fb52aadafec60 \
	I0103 20:16:28.396913  478496 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:497121a93982227783f08bfd7c063fc9f8a8d85d0f5f85a6107fb52aadafec60 \
	I0103 20:16:28.396932  478496 kubeadm.go:322] 	--control-plane 
	I0103 20:16:28.396936  478496 command_runner.go:130] > 	--control-plane 
	I0103 20:16:28.396940  478496 kubeadm.go:322] 
	I0103 20:16:28.397019  478496 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0103 20:16:28.397026  478496 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I0103 20:16:28.397032  478496 kubeadm.go:322] 
	I0103 20:16:28.397117  478496 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 9445k4.ylfxx9ie9apygr6f \
	I0103 20:16:28.397122  478496 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token 9445k4.ylfxx9ie9apygr6f \
	I0103 20:16:28.397216  478496 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:497121a93982227783f08bfd7c063fc9f8a8d85d0f5f85a6107fb52aadafec60 
	I0103 20:16:28.397221  478496 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:497121a93982227783f08bfd7c063fc9f8a8d85d0f5f85a6107fb52aadafec60 
	I0103 20:16:28.401345  478496 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1051-aws\n", err: exit status 1
	I0103 20:16:28.401372  478496 command_runner.go:130] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1051-aws\n", err: exit status 1
	I0103 20:16:28.401471  478496 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0103 20:16:28.401478  478496 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0103 20:16:28.401490  478496 cni.go:84] Creating CNI manager for ""
	I0103 20:16:28.401496  478496 cni.go:136] 1 nodes found, recommending kindnet
	I0103 20:16:28.404690  478496 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0103 20:16:28.406484  478496 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0103 20:16:28.432655  478496 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0103 20:16:28.432680  478496 command_runner.go:130] >   Size: 4030506   	Blocks: 7880       IO Block: 4096   regular file
	I0103 20:16:28.432689  478496 command_runner.go:130] > Device: 36h/54d	Inode: 2362850     Links: 1
	I0103 20:16:28.432697  478496 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0103 20:16:28.432703  478496 command_runner.go:130] > Access: 2023-12-04 16:39:54.000000000 +0000
	I0103 20:16:28.432709  478496 command_runner.go:130] > Modify: 2023-12-04 16:39:54.000000000 +0000
	I0103 20:16:28.432716  478496 command_runner.go:130] > Change: 2024-01-03 19:53:10.292911836 +0000
	I0103 20:16:28.432722  478496 command_runner.go:130] >  Birth: 2024-01-03 19:53:10.240912112 +0000
	I0103 20:16:28.433514  478496 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0103 20:16:28.433529  478496 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0103 20:16:28.484427  478496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0103 20:16:29.354883  478496 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0103 20:16:29.361180  478496 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0103 20:16:29.373576  478496 command_runner.go:130] > serviceaccount/kindnet created
	I0103 20:16:29.387137  478496 command_runner.go:130] > daemonset.apps/kindnet created
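
With one node found and no CNI specified, minikube falls back to kindnet, copies the rendered manifest to /var/tmp/minikube/cni.yaml over SSH, and applies it with the cluster's own kubectl binary. A stripped-down version of that apply step, with paths taken from the log and error handling simplified:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		// Apply the rendered CNI manifest with the kubeconfig kubeadm just wrote,
		// as the ssh_runner invocation above does.
		cmd := exec.Command("sudo",
			"/var/lib/minikube/binaries/v1.28.4/kubectl", "apply",
			"--kubeconfig=/var/lib/minikube/kubeconfig",
			"-f", "/var/tmp/minikube/cni.yaml")
		out, err := cmd.CombinedOutput()
		fmt.Print(string(out)) // clusterrole, clusterrolebinding, serviceaccount, daemonset
		if err != nil {
			os.Exit(1)
		}
	}
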
	I0103 20:16:29.392862  478496 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0103 20:16:29.393018  478496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 20:16:29.393119  478496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=1b6a81cbc05f28310ff11df4170e79e2b8bf477a minikube.k8s.io/name=multinode-004925 minikube.k8s.io/updated_at=2024_01_03T20_16_29_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 20:16:29.568558  478496 command_runner.go:130] > node/multinode-004925 labeled
	I0103 20:16:29.572620  478496 command_runner.go:130] > -16
	I0103 20:16:29.572656  478496 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0103 20:16:29.572682  478496 ops.go:34] apiserver oom_adj: -16
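
The -16 read back here is the kube-apiserver's OOM score adjustment, which minikube records (ops.go) as a sanity check that the apiserver is deprioritised for the kernel's OOM killer. Reading it by hand looks roughly like this (taking only the first pgrep match is our simplification):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("pgrep", "kube-apiserver").Output()
		if err != nil {
			fmt.Fprintln(os.Stderr, "no kube-apiserver process:", err)
			os.Exit(1)
		}
		pid := strings.Fields(string(out))[0] // first matching PID
		adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Printf("apiserver oom_adj: %s\n", strings.TrimSpace(string(adj)))
	}
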
	I0103 20:16:29.572761  478496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 20:16:29.684087  478496 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0103 20:16:30.074648  478496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 20:16:30.189434  478496 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0103 20:16:30.572899  478496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 20:16:30.668625  478496 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0103 20:16:31.073252  478496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 20:16:31.172132  478496 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0103 20:16:31.573694  478496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 20:16:31.671431  478496 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0103 20:16:32.072993  478496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 20:16:32.170482  478496 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0103 20:16:32.572995  478496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 20:16:32.666022  478496 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0103 20:16:33.073858  478496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 20:16:33.163761  478496 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0103 20:16:33.573126  478496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 20:16:33.671821  478496 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0103 20:16:34.073428  478496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 20:16:34.180675  478496 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0103 20:16:34.573003  478496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 20:16:34.666659  478496 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0103 20:16:35.072990  478496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 20:16:35.171525  478496 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0103 20:16:35.572917  478496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 20:16:35.667865  478496 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0103 20:16:36.073619  478496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 20:16:36.177569  478496 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0103 20:16:36.572883  478496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 20:16:36.669719  478496 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0103 20:16:37.072894  478496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 20:16:37.178920  478496 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0103 20:16:37.573608  478496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 20:16:37.669629  478496 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0103 20:16:38.072920  478496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 20:16:38.178903  478496 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0103 20:16:38.573766  478496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 20:16:38.672657  478496 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0103 20:16:39.072920  478496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 20:16:39.170279  478496 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0103 20:16:39.573147  478496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 20:16:39.661619  478496 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0103 20:16:40.073702  478496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 20:16:40.178078  478496 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0103 20:16:40.573319  478496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 20:16:40.720377  478496 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0103 20:16:41.072969  478496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 20:16:41.182234  478496 command_runner.go:130] > NAME      SECRETS   AGE
	I0103 20:16:41.182254  478496 command_runner.go:130] > default   0         1s
	I0103 20:16:41.182273  478496 kubeadm.go:1088] duration metric: took 11.78931085s to wait for elevateKubeSystemPrivileges.
	I0103 20:16:41.182285  478496 kubeadm.go:406] StartCluster complete in 30.431015766s
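
The 11.79s elevateKubeSystemPrivileges metric above is dominated by the retry loop before it: kubectl get sa default fails with NotFound roughly every half second until the controller-manager creates the "default" ServiceAccount, at which point minikube can bind kube-system's default SA to cluster-admin. The loop reduces to something like this (the interval and timeout are our guesses, not minikube constants):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		deadline := time.Now().Add(2 * time.Minute)
		for time.Now().Before(deadline) {
			err := exec.Command("sudo",
				"/var/lib/minikube/binaries/v1.28.4/kubectl",
				"get", "sa", "default",
				"--kubeconfig=/var/lib/minikube/kubeconfig").Run()
			if err == nil {
				fmt.Println("default ServiceAccount exists; safe to grant kube-system RBAC")
				return
			}
			time.Sleep(500 * time.Millisecond) // matches the ~0.5s cadence in the log
		}
		fmt.Println("timed out waiting for the default ServiceAccount")
	}
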
	I0103 20:16:41.182301  478496 settings.go:142] acquiring lock: {Name:mk35e0b2d8071191a72193c66ba9549131012420 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 20:16:41.182375  478496 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17885-409390/kubeconfig
	I0103 20:16:41.183088  478496 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17885-409390/kubeconfig: {Name:mkcf9b222e1b36afc1c2e4e412234b0c105c9bc2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 20:16:41.183337  478496 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0103 20:16:41.183620  478496 config.go:182] Loaded profile config "multinode-004925": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0103 20:16:41.183650  478496 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17885-409390/kubeconfig
	I0103 20:16:41.183843  478496 addons.go:505] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0103 20:16:41.183921  478496 addons.go:69] Setting storage-provisioner=true in profile "multinode-004925"
	I0103 20:16:41.183937  478496 addons.go:237] Setting addon storage-provisioner=true in "multinode-004925"
	I0103 20:16:41.183979  478496 host.go:66] Checking if "multinode-004925" exists ...
	I0103 20:16:41.183992  478496 kapi.go:59] client config for multinode-004925: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17885-409390/.minikube/profiles/multinode-004925/client.crt", KeyFile:"/home/jenkins/minikube-integration/17885-409390/.minikube/profiles/multinode-004925/client.key", CAFile:"/home/jenkins/minikube-integration/17885-409390/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16bfa00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0103 20:16:41.184444  478496 cli_runner.go:164] Run: docker container inspect multinode-004925 --format={{.State.Status}}
	I0103 20:16:41.184868  478496 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0103 20:16:41.184892  478496 round_trippers.go:469] Request Headers:
	I0103 20:16:41.184900  478496 round_trippers.go:473]     Accept: application/json, */*
	I0103 20:16:41.184907  478496 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0103 20:16:41.184912  478496 addons.go:69] Setting default-storageclass=true in profile "multinode-004925"
	I0103 20:16:41.184931  478496 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-004925"
	I0103 20:16:41.185111  478496 cert_rotation.go:137] Starting client certificate rotation controller
	I0103 20:16:41.185238  478496 cli_runner.go:164] Run: docker container inspect multinode-004925 --format={{.State.Status}}
	I0103 20:16:41.216962  478496 round_trippers.go:574] Response Status: 200 OK in 32 milliseconds
	I0103 20:16:41.216984  478496 round_trippers.go:577] Response Headers:
	I0103 20:16:41.216992  478496 round_trippers.go:580]     Content-Type: application/json
	I0103 20:16:41.216999  478496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2dd5fe00-4fef-4b9d-9976-ced013b24f01
	I0103 20:16:41.217005  478496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1353a2-568a-4e94-914a-294122a8013a
	I0103 20:16:41.217012  478496 round_trippers.go:580]     Content-Length: 291
	I0103 20:16:41.217018  478496 round_trippers.go:580]     Date: Wed, 03 Jan 2024 20:16:41 GMT
	I0103 20:16:41.217024  478496 round_trippers.go:580]     Audit-Id: 9c893f9a-f9a2-48f7-b8f1-a3fb1ba8bccb
	I0103 20:16:41.217030  478496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 20:16:41.217053  478496 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"8c8554bf-4657-4c73-b569-16c8b8e0483f","resourceVersion":"378","creationTimestamp":"2024-01-03T20:16:28Z"},"spec":{"replicas":2},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0103 20:16:41.217511  478496 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"8c8554bf-4657-4c73-b569-16c8b8e0483f","resourceVersion":"378","creationTimestamp":"2024-01-03T20:16:28Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0103 20:16:41.217560  478496 round_trippers.go:463] PUT https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0103 20:16:41.217566  478496 round_trippers.go:469] Request Headers:
	I0103 20:16:41.217573  478496 round_trippers.go:473]     Accept: application/json, */*
	I0103 20:16:41.217580  478496 round_trippers.go:473]     Content-Type: application/json
	I0103 20:16:41.217586  478496 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0103 20:16:41.235297  478496 round_trippers.go:574] Response Status: 200 OK in 17 milliseconds
	I0103 20:16:41.235319  478496 round_trippers.go:577] Response Headers:
	I0103 20:16:41.235328  478496 round_trippers.go:580]     Audit-Id: 42f5bb39-d9e0-4442-8ad8-4cd156fc4f38
	I0103 20:16:41.235335  478496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 20:16:41.235341  478496 round_trippers.go:580]     Content-Type: application/json
	I0103 20:16:41.235348  478496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2dd5fe00-4fef-4b9d-9976-ced013b24f01
	I0103 20:16:41.235355  478496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1353a2-568a-4e94-914a-294122a8013a
	I0103 20:16:41.235361  478496 round_trippers.go:580]     Content-Length: 291
	I0103 20:16:41.235379  478496 round_trippers.go:580]     Date: Wed, 03 Jan 2024 20:16:41 GMT
	I0103 20:16:41.236323  478496 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"8c8554bf-4657-4c73-b569-16c8b8e0483f","resourceVersion":"382","creationTimestamp":"2024-01-03T20:16:28Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
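
The GET/PUT pair above is the deployment's autoscaling/v1 Scale subresource: minikube reads spec.replicas=2 for coredns and writes it back as 1, since a single DNS replica suffices for a one-node cluster. With client-go the same round trip is approximately the following (the kubeconfig path is an assumption):

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		deployments := cs.AppsV1().Deployments("kube-system")
		// GET .../deployments/coredns/scale
		scale, err := deployments.GetScale(context.TODO(), "coredns", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		scale.Spec.Replicas = 1
		// PUT .../deployments/coredns/scale
		if _, err := deployments.UpdateScale(context.TODO(), "coredns", scale, metav1.UpdateOptions{}); err != nil {
			panic(err)
		}
		fmt.Println("coredns rescaled to 1 replica")
	}
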
	I0103 20:16:41.257294  478496 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0103 20:16:41.255552  478496 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17885-409390/kubeconfig
	I0103 20:16:41.260011  478496 kapi.go:59] client config for multinode-004925: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17885-409390/.minikube/profiles/multinode-004925/client.crt", KeyFile:"/home/jenkins/minikube-integration/17885-409390/.minikube/profiles/multinode-004925/client.key", CAFile:"/home/jenkins/minikube-integration/17885-409390/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16bfa00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0103 20:16:41.260283  478496 addons.go:237] Setting addon default-storageclass=true in "multinode-004925"
	I0103 20:16:41.260320  478496 host.go:66] Checking if "multinode-004925" exists ...
	I0103 20:16:41.260804  478496 cli_runner.go:164] Run: docker container inspect multinode-004925 --format={{.State.Status}}
	I0103 20:16:41.261045  478496 addons.go:429] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0103 20:16:41.261062  478496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0103 20:16:41.261106  478496 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-004925
	I0103 20:16:41.295741  478496 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/17885-409390/.minikube/machines/multinode-004925/id_rsa Username:docker}
	I0103 20:16:41.306180  478496 addons.go:429] installing /etc/kubernetes/addons/storageclass.yaml
	I0103 20:16:41.306207  478496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0103 20:16:41.306271  478496 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-004925
	I0103 20:16:41.335570  478496 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/17885-409390/.minikube/machines/multinode-004925/id_rsa Username:docker}
	I0103 20:16:41.520041  478496 command_runner.go:130] > apiVersion: v1
	I0103 20:16:41.520105  478496 command_runner.go:130] > data:
	I0103 20:16:41.520124  478496 command_runner.go:130] >   Corefile: |
	I0103 20:16:41.520143  478496 command_runner.go:130] >     .:53 {
	I0103 20:16:41.520162  478496 command_runner.go:130] >         errors
	I0103 20:16:41.520193  478496 command_runner.go:130] >         health {
	I0103 20:16:41.520218  478496 command_runner.go:130] >            lameduck 5s
	I0103 20:16:41.520237  478496 command_runner.go:130] >         }
	I0103 20:16:41.520256  478496 command_runner.go:130] >         ready
	I0103 20:16:41.520289  478496 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0103 20:16:41.520310  478496 command_runner.go:130] >            pods insecure
	I0103 20:16:41.520329  478496 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0103 20:16:41.520348  478496 command_runner.go:130] >            ttl 30
	I0103 20:16:41.520366  478496 command_runner.go:130] >         }
	I0103 20:16:41.520394  478496 command_runner.go:130] >         prometheus :9153
	I0103 20:16:41.520419  478496 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0103 20:16:41.520440  478496 command_runner.go:130] >            max_concurrent 1000
	I0103 20:16:41.520457  478496 command_runner.go:130] >         }
	I0103 20:16:41.520476  478496 command_runner.go:130] >         cache 30
	I0103 20:16:41.520503  478496 command_runner.go:130] >         loop
	I0103 20:16:41.520526  478496 command_runner.go:130] >         reload
	I0103 20:16:41.520546  478496 command_runner.go:130] >         loadbalance
	I0103 20:16:41.520583  478496 command_runner.go:130] >     }
	I0103 20:16:41.520610  478496 command_runner.go:130] > kind: ConfigMap
	I0103 20:16:41.520634  478496 command_runner.go:130] > metadata:
	I0103 20:16:41.520659  478496 command_runner.go:130] >   creationTimestamp: "2024-01-03T20:16:28Z"
	I0103 20:16:41.520677  478496 command_runner.go:130] >   name: coredns
	I0103 20:16:41.520697  478496 command_runner.go:130] >   namespace: kube-system
	I0103 20:16:41.520728  478496 command_runner.go:130] >   resourceVersion: "269"
	I0103 20:16:41.520748  478496 command_runner.go:130] >   uid: 1461df46-5023-46ad-b05e-75538eead9a1
	I0103 20:16:41.524705  478496 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.58.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
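
That sed pipeline splices a hosts plugin stanza into the Corefile just ahead of the forward block (and a log directive ahead of errors), so pods can resolve host.minikube.internal to the host gateway at 192.168.58.1. Reconstructing from the sed expressions and the ConfigMap dump above, the replaced Corefile should contain roughly:

	    .:53 {
	        log
	        errors
	        ...
	        hosts {
	           192.168.58.1 host.minikube.internal
	           fallthrough
	        }
	        forward . /etc/resolv.conf {
	           max_concurrent 1000
	        }
	        ...
	    }

which is what the "host record injected into CoreDNS's ConfigMap" line below confirms once the replace succeeds.
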
	I0103 20:16:41.526023  478496 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0103 20:16:41.552849  478496 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0103 20:16:41.685033  478496 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0103 20:16:41.685066  478496 round_trippers.go:469] Request Headers:
	I0103 20:16:41.685076  478496 round_trippers.go:473]     Accept: application/json, */*
	I0103 20:16:41.685108  478496 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0103 20:16:41.756267  478496 round_trippers.go:574] Response Status: 200 OK in 71 milliseconds
	I0103 20:16:41.756292  478496 round_trippers.go:577] Response Headers:
	I0103 20:16:41.756302  478496 round_trippers.go:580]     Content-Type: application/json
	I0103 20:16:41.756308  478496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2dd5fe00-4fef-4b9d-9976-ced013b24f01
	I0103 20:16:41.756315  478496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1353a2-568a-4e94-914a-294122a8013a
	I0103 20:16:41.756321  478496 round_trippers.go:580]     Content-Length: 291
	I0103 20:16:41.756328  478496 round_trippers.go:580]     Date: Wed, 03 Jan 2024 20:16:41 GMT
	I0103 20:16:41.756334  478496 round_trippers.go:580]     Audit-Id: 399aaf27-c749-48c6-8105-ca374f2a34f0
	I0103 20:16:41.756349  478496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 20:16:41.756380  478496 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"8c8554bf-4657-4c73-b569-16c8b8e0483f","resourceVersion":"391","creationTimestamp":"2024-01-03T20:16:28Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0103 20:16:41.756499  478496 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-004925" context rescaled to 1 replicas
	I0103 20:16:41.756530  478496 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0103 20:16:41.759125  478496 out.go:177] * Verifying Kubernetes components...
	I0103 20:16:41.760949  478496 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0103 20:16:42.175097  478496 command_runner.go:130] > configmap/coredns replaced
	I0103 20:16:42.184551  478496 start.go:929] {"host.minikube.internal": 192.168.58.1} host record injected into CoreDNS's ConfigMap
	I0103 20:16:42.400457  478496 command_runner.go:130] > serviceaccount/storage-provisioner created
	I0103 20:16:42.400490  478496 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0103 20:16:42.400499  478496 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0103 20:16:42.400509  478496 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0103 20:16:42.400516  478496 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I0103 20:16:42.400521  478496 command_runner.go:130] > pod/storage-provisioner created
	I0103 20:16:42.400549  478496 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I0103 20:16:42.400668  478496 round_trippers.go:463] GET https://192.168.58.2:8443/apis/storage.k8s.io/v1/storageclasses
	I0103 20:16:42.400687  478496 round_trippers.go:469] Request Headers:
	I0103 20:16:42.400698  478496 round_trippers.go:473]     Accept: application/json, */*
	I0103 20:16:42.400713  478496 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0103 20:16:42.401263  478496 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17885-409390/kubeconfig
	I0103 20:16:42.401579  478496 kapi.go:59] client config for multinode-004925: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17885-409390/.minikube/profiles/multinode-004925/client.crt", KeyFile:"/home/jenkins/minikube-integration/17885-409390/.minikube/profiles/multinode-004925/client.key", CAFile:"/home/jenkins/minikube-integration/17885-409390/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16bfa00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0103 20:16:42.401890  478496 node_ready.go:35] waiting up to 6m0s for node "multinode-004925" to be "Ready" ...
	I0103 20:16:42.401967  478496 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-004925
	I0103 20:16:42.401977  478496 round_trippers.go:469] Request Headers:
	I0103 20:16:42.402001  478496 round_trippers.go:473]     Accept: application/json, */*
	I0103 20:16:42.402013  478496 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0103 20:16:42.406453  478496 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0103 20:16:42.406491  478496 round_trippers.go:577] Response Headers:
	I0103 20:16:42.406500  478496 round_trippers.go:580]     Audit-Id: 7b51d409-df2f-41ab-a7f3-231229ab93a8
	I0103 20:16:42.406506  478496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 20:16:42.406550  478496 round_trippers.go:580]     Content-Type: application/json
	I0103 20:16:42.406558  478496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2dd5fe00-4fef-4b9d-9976-ced013b24f01
	I0103 20:16:42.406565  478496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1353a2-568a-4e94-914a-294122a8013a
	I0103 20:16:42.406576  478496 round_trippers.go:580]     Date: Wed, 03 Jan 2024 20:16:42 GMT
	I0103 20:16:42.406724  478496 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-004925","uid":"106a4418-70f9-4422-af71-2d3c896282a0","resourceVersion":"345","creationTimestamp":"2024-01-03T20:16:24Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-004925","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-004925","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T20_16_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T20:16:24Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0103 20:16:42.410156  478496 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0103 20:16:42.410186  478496 round_trippers.go:577] Response Headers:
	I0103 20:16:42.410196  478496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 20:16:42.410204  478496 round_trippers.go:580]     Content-Type: application/json
	I0103 20:16:42.410210  478496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2dd5fe00-4fef-4b9d-9976-ced013b24f01
	I0103 20:16:42.410224  478496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1353a2-568a-4e94-914a-294122a8013a
	I0103 20:16:42.410231  478496 round_trippers.go:580]     Content-Length: 1273
	I0103 20:16:42.410239  478496 round_trippers.go:580]     Date: Wed, 03 Jan 2024 20:16:42 GMT
	I0103 20:16:42.410246  478496 round_trippers.go:580]     Audit-Id: b77cffdd-f71c-4ac0-9f47-261513c1aaab
	I0103 20:16:42.410322  478496 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"410"},"items":[{"metadata":{"name":"standard","uid":"3d3e3e3e-b934-43d5-93fc-27d3622ad7bf","resourceVersion":"401","creationTimestamp":"2024-01-03T20:16:41Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-01-03T20:16:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I0103 20:16:42.410989  478496 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"3d3e3e3e-b934-43d5-93fc-27d3622ad7bf","resourceVersion":"401","creationTimestamp":"2024-01-03T20:16:41Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-01-03T20:16:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0103 20:16:42.411061  478496 round_trippers.go:463] PUT https://192.168.58.2:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0103 20:16:42.411075  478496 round_trippers.go:469] Request Headers:
	I0103 20:16:42.411084  478496 round_trippers.go:473]     Content-Type: application/json
	I0103 20:16:42.411100  478496 round_trippers.go:473]     Accept: application/json, */*
	I0103 20:16:42.411114  478496 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0103 20:16:42.415661  478496 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0103 20:16:42.415687  478496 round_trippers.go:577] Response Headers:
	I0103 20:16:42.415697  478496 round_trippers.go:580]     Date: Wed, 03 Jan 2024 20:16:42 GMT
	I0103 20:16:42.415704  478496 round_trippers.go:580]     Audit-Id: 9f88da28-1455-45e2-b20e-3a97e834da1d
	I0103 20:16:42.415711  478496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 20:16:42.415717  478496 round_trippers.go:580]     Content-Type: application/json
	I0103 20:16:42.415723  478496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2dd5fe00-4fef-4b9d-9976-ced013b24f01
	I0103 20:16:42.415737  478496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1353a2-568a-4e94-914a-294122a8013a
	I0103 20:16:42.415746  478496 round_trippers.go:580]     Content-Length: 1220
	I0103 20:16:42.415796  478496 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"3d3e3e3e-b934-43d5-93fc-27d3622ad7bf","resourceVersion":"401","creationTimestamp":"2024-01-03T20:16:41Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-01-03T20:16:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0103 20:16:42.419227  478496 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0103 20:16:42.421567  478496 addons.go:508] enable addons completed in 1.237715495s: enabled=[storage-provisioner default-storageclass]
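The PUT to /apis/storage.k8s.io/v1/storageclasses/standard above is the default-storageclass addon's whole job: stamping the storageclass.kubernetes.io/is-default-class annotation onto the "standard" StorageClass so PVCs without an explicit storageClassName get provisioned. A minimal client-go sketch of the same update follows; the function name and kubeconfig handling are illustrative assumptions, not minikube's actual addon code.

	// Sketch: mark a StorageClass as the cluster default, mirroring the PUT
	// recorded in the log above. Assumes client-go; not minikube's own code.
	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func markDefaultStorageClass(cs kubernetes.Interface, name string) error {
		// GET the current object, set the well-known annotation, PUT it back.
		sc, err := cs.StorageV1().StorageClasses().Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		if sc.Annotations == nil {
			sc.Annotations = map[string]string{}
		}
		sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "true"
		_, err = cs.StorageV1().StorageClasses().Update(context.TODO(), sc, metav1.UpdateOptions{})
		return err
	}

	func main() {
		// Loads ~/.kube/config; an assumption for illustration only.
		config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}
		if err := markDefaultStorageClass(cs, "standard"); err != nil {
			panic(err)
		}
		fmt.Println("storageclass/standard marked as default")
	}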
	I0103 20:16:42.902134  478496 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-004925
	I0103 20:16:42.902159  478496 round_trippers.go:469] Request Headers:
	I0103 20:16:42.902169  478496 round_trippers.go:473]     Accept: application/json, */*
	I0103 20:16:42.902177  478496 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0103 20:16:42.904851  478496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 20:16:42.904878  478496 round_trippers.go:577] Response Headers:
	I0103 20:16:42.904887  478496 round_trippers.go:580]     Audit-Id: 15e8441e-f5fe-44e3-8ade-0b0628b4eddb
	I0103 20:16:42.904898  478496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 20:16:42.904911  478496 round_trippers.go:580]     Content-Type: application/json
	I0103 20:16:42.904920  478496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2dd5fe00-4fef-4b9d-9976-ced013b24f01
	I0103 20:16:42.904927  478496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1353a2-568a-4e94-914a-294122a8013a
	I0103 20:16:42.904936  478496 round_trippers.go:580]     Date: Wed, 03 Jan 2024 20:16:42 GMT
	I0103 20:16:42.905078  478496 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-004925","uid":"106a4418-70f9-4422-af71-2d3c896282a0","resourceVersion":"345","creationTimestamp":"2024-01-03T20:16:24Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-004925","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-004925","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T20_16_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T20:16:24Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0103 20:16:43.402724  478496 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-004925
	I0103 20:16:43.402757  478496 round_trippers.go:469] Request Headers:
	I0103 20:16:43.402766  478496 round_trippers.go:473]     Accept: application/json, */*
	I0103 20:16:43.402774  478496 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0103 20:16:43.405590  478496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 20:16:43.405617  478496 round_trippers.go:577] Response Headers:
	I0103 20:16:43.405627  478496 round_trippers.go:580]     Content-Type: application/json
	I0103 20:16:43.405634  478496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2dd5fe00-4fef-4b9d-9976-ced013b24f01
	I0103 20:16:43.405648  478496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1353a2-568a-4e94-914a-294122a8013a
	I0103 20:16:43.405655  478496 round_trippers.go:580]     Date: Wed, 03 Jan 2024 20:16:43 GMT
	I0103 20:16:43.405665  478496 round_trippers.go:580]     Audit-Id: 775b5c9e-09f9-48bd-a1b5-0507cd28abbd
	I0103 20:16:43.405674  478496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 20:16:43.405789  478496 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-004925","uid":"106a4418-70f9-4422-af71-2d3c896282a0","resourceVersion":"345","creationTimestamp":"2024-01-03T20:16:24Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-004925","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-004925","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T20_16_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T20:16:24Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0103 20:16:43.902967  478496 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-004925
	I0103 20:16:43.902991  478496 round_trippers.go:469] Request Headers:
	I0103 20:16:43.903001  478496 round_trippers.go:473]     Accept: application/json, */*
	I0103 20:16:43.903009  478496 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0103 20:16:43.905703  478496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 20:16:43.905726  478496 round_trippers.go:577] Response Headers:
	I0103 20:16:43.905735  478496 round_trippers.go:580]     Date: Wed, 03 Jan 2024 20:16:43 GMT
	I0103 20:16:43.905742  478496 round_trippers.go:580]     Audit-Id: 291acfdf-7d3a-44fc-a29d-14f5c9d055fb
	I0103 20:16:43.905748  478496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 20:16:43.905771  478496 round_trippers.go:580]     Content-Type: application/json
	I0103 20:16:43.905784  478496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2dd5fe00-4fef-4b9d-9976-ced013b24f01
	I0103 20:16:43.905790  478496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1353a2-568a-4e94-914a-294122a8013a
	I0103 20:16:43.905928  478496 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-004925","uid":"106a4418-70f9-4422-af71-2d3c896282a0","resourceVersion":"345","creationTimestamp":"2024-01-03T20:16:24Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-004925","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-004925","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T20_16_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T20:16:24Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0103 20:16:44.402378  478496 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-004925
	I0103 20:16:44.402404  478496 round_trippers.go:469] Request Headers:
	I0103 20:16:44.402414  478496 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0103 20:16:44.402422  478496 round_trippers.go:473]     Accept: application/json, */*
	I0103 20:16:44.405049  478496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 20:16:44.405119  478496 round_trippers.go:577] Response Headers:
	I0103 20:16:44.405132  478496 round_trippers.go:580]     Content-Type: application/json
	I0103 20:16:44.405139  478496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2dd5fe00-4fef-4b9d-9976-ced013b24f01
	I0103 20:16:44.405146  478496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1353a2-568a-4e94-914a-294122a8013a
	I0103 20:16:44.405152  478496 round_trippers.go:580]     Date: Wed, 03 Jan 2024 20:16:44 GMT
	I0103 20:16:44.405158  478496 round_trippers.go:580]     Audit-Id: dd0e47f9-e213-4fc7-b088-ceef910b1a9b
	I0103 20:16:44.405165  478496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 20:16:44.405295  478496 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-004925","uid":"106a4418-70f9-4422-af71-2d3c896282a0","resourceVersion":"345","creationTimestamp":"2024-01-03T20:16:24Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-004925","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-004925","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T20_16_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T20:16:24Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0103 20:16:44.405741  478496 node_ready.go:58] node "multinode-004925" has status "Ready":"False"
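The repeating GET /api/v1/nodes/multinode-004925 cycle above is minikube's node-readiness wait (node_ready.go): fetch the Node roughly every 500ms, inspect its conditions, and keep polling until NodeReady reports True. A minimal sketch of that loop, assuming client-go; the names, interval, and timeout are illustrative, not minikube's actual settings.

	// Sketch of a node-readiness wait under the assumptions stated above.
	package nodewait

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// WaitNodeReady polls the named Node until its NodeReady condition is True,
	// matching the ~500ms GET cadence visible in the timestamps above.
	func WaitNodeReady(cs kubernetes.Interface, name string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
			if err != nil {
				return err
			}
			for _, cond := range node.Status.Conditions {
				if cond.Type == corev1.NodeReady && cond.Status == corev1.ConditionTrue {
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("node %q not Ready within %v", name, timeout)
	}

A flat sleep-and-retry loop mirrors what the log shows; a production client would more likely lean on the k8s.io/apimachinery/pkg/util/wait helpers or a watch instead of raw polling.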
	I0103 20:16:44.903064  478496 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-004925
	I0103 20:16:44.903096  478496 round_trippers.go:469] Request Headers:
	I0103 20:16:44.903107  478496 round_trippers.go:473]     Accept: application/json, */*
	I0103 20:16:44.903114  478496 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0103 20:16:44.905842  478496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 20:16:44.905866  478496 round_trippers.go:577] Response Headers:
	I0103 20:16:44.905875  478496 round_trippers.go:580]     Date: Wed, 03 Jan 2024 20:16:44 GMT
	I0103 20:16:44.905882  478496 round_trippers.go:580]     Audit-Id: 9520a6c0-e2ed-468c-a438-2c28b33a4ac0
	I0103 20:16:44.905888  478496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 20:16:44.905901  478496 round_trippers.go:580]     Content-Type: application/json
	I0103 20:16:44.905908  478496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2dd5fe00-4fef-4b9d-9976-ced013b24f01
	I0103 20:16:44.905915  478496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1353a2-568a-4e94-914a-294122a8013a
	I0103 20:16:44.906050  478496 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-004925","uid":"106a4418-70f9-4422-af71-2d3c896282a0","resourceVersion":"345","creationTimestamp":"2024-01-03T20:16:24Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-004925","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-004925","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T20_16_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T20:16:24Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0103 20:16:45.403040  478496 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-004925
	I0103 20:16:45.403065  478496 round_trippers.go:469] Request Headers:
	I0103 20:16:45.403076  478496 round_trippers.go:473]     Accept: application/json, */*
	I0103 20:16:45.403083  478496 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0103 20:16:45.405831  478496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 20:16:45.405859  478496 round_trippers.go:577] Response Headers:
	I0103 20:16:45.405869  478496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2dd5fe00-4fef-4b9d-9976-ced013b24f01
	I0103 20:16:45.405875  478496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1353a2-568a-4e94-914a-294122a8013a
	I0103 20:16:45.405918  478496 round_trippers.go:580]     Date: Wed, 03 Jan 2024 20:16:45 GMT
	I0103 20:16:45.405926  478496 round_trippers.go:580]     Audit-Id: cb14571a-f115-46aa-aacf-094b0d4b7c1a
	I0103 20:16:45.405932  478496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 20:16:45.405938  478496 round_trippers.go:580]     Content-Type: application/json
	I0103 20:16:45.406096  478496 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-004925","uid":"106a4418-70f9-4422-af71-2d3c896282a0","resourceVersion":"345","creationTimestamp":"2024-01-03T20:16:24Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-004925","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-004925","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T20_16_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T20:16:24Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0103 20:16:45.902730  478496 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-004925
	I0103 20:16:45.902757  478496 round_trippers.go:469] Request Headers:
	I0103 20:16:45.902767  478496 round_trippers.go:473]     Accept: application/json, */*
	I0103 20:16:45.902775  478496 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0103 20:16:45.905448  478496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 20:16:45.905474  478496 round_trippers.go:577] Response Headers:
	I0103 20:16:45.905485  478496 round_trippers.go:580]     Audit-Id: a8dfa1b0-3974-459d-b570-d50c741c8949
	I0103 20:16:45.905492  478496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 20:16:45.905522  478496 round_trippers.go:580]     Content-Type: application/json
	I0103 20:16:45.905536  478496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2dd5fe00-4fef-4b9d-9976-ced013b24f01
	I0103 20:16:45.905542  478496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1353a2-568a-4e94-914a-294122a8013a
	I0103 20:16:45.905549  478496 round_trippers.go:580]     Date: Wed, 03 Jan 2024 20:16:45 GMT
	I0103 20:16:45.905693  478496 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-004925","uid":"106a4418-70f9-4422-af71-2d3c896282a0","resourceVersion":"345","creationTimestamp":"2024-01-03T20:16:24Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-004925","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-004925","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T20_16_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T20:16:24Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0103 20:16:46.402904  478496 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-004925
	I0103 20:16:46.402931  478496 round_trippers.go:469] Request Headers:
	I0103 20:16:46.402941  478496 round_trippers.go:473]     Accept: application/json, */*
	I0103 20:16:46.402948  478496 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0103 20:16:46.405797  478496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 20:16:46.405832  478496 round_trippers.go:577] Response Headers:
	I0103 20:16:46.405841  478496 round_trippers.go:580]     Audit-Id: 997024f4-88a9-4b3a-9aa2-eb326bf1ab84
	I0103 20:16:46.405848  478496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 20:16:46.405855  478496 round_trippers.go:580]     Content-Type: application/json
	I0103 20:16:46.405861  478496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2dd5fe00-4fef-4b9d-9976-ced013b24f01
	I0103 20:16:46.405867  478496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1353a2-568a-4e94-914a-294122a8013a
	I0103 20:16:46.405877  478496 round_trippers.go:580]     Date: Wed, 03 Jan 2024 20:16:46 GMT
	I0103 20:16:46.406169  478496 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-004925","uid":"106a4418-70f9-4422-af71-2d3c896282a0","resourceVersion":"345","creationTimestamp":"2024-01-03T20:16:24Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-004925","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-004925","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T20_16_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T20:16:24Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0103 20:16:46.406609  478496 node_ready.go:58] node "multinode-004925" has status "Ready":"False"
	I0103 20:16:46.902885  478496 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-004925
	I0103 20:16:46.902909  478496 round_trippers.go:469] Request Headers:
	I0103 20:16:46.902919  478496 round_trippers.go:473]     Accept: application/json, */*
	I0103 20:16:46.902927  478496 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0103 20:16:46.905493  478496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 20:16:46.905513  478496 round_trippers.go:577] Response Headers:
	I0103 20:16:46.905533  478496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2dd5fe00-4fef-4b9d-9976-ced013b24f01
	I0103 20:16:46.905542  478496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1353a2-568a-4e94-914a-294122a8013a
	I0103 20:16:46.905550  478496 round_trippers.go:580]     Date: Wed, 03 Jan 2024 20:16:46 GMT
	I0103 20:16:46.905557  478496 round_trippers.go:580]     Audit-Id: c114d438-3aeb-454c-8e56-02f528a9cdd5
	I0103 20:16:46.905563  478496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 20:16:46.905570  478496 round_trippers.go:580]     Content-Type: application/json
	I0103 20:16:46.905718  478496 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-004925","uid":"106a4418-70f9-4422-af71-2d3c896282a0","resourceVersion":"345","creationTimestamp":"2024-01-03T20:16:24Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-004925","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-004925","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T20_16_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T20:16:24Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0103 20:16:47.402804  478496 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-004925
	I0103 20:16:47.402829  478496 round_trippers.go:469] Request Headers:
	I0103 20:16:47.402840  478496 round_trippers.go:473]     Accept: application/json, */*
	I0103 20:16:47.402864  478496 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0103 20:16:47.405450  478496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 20:16:47.405476  478496 round_trippers.go:577] Response Headers:
	I0103 20:16:47.405485  478496 round_trippers.go:580]     Audit-Id: 22ee45b8-04e4-454c-ac97-105912f04868
	I0103 20:16:47.405495  478496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 20:16:47.405502  478496 round_trippers.go:580]     Content-Type: application/json
	I0103 20:16:47.405509  478496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2dd5fe00-4fef-4b9d-9976-ced013b24f01
	I0103 20:16:47.405515  478496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1353a2-568a-4e94-914a-294122a8013a
	I0103 20:16:47.405522  478496 round_trippers.go:580]     Date: Wed, 03 Jan 2024 20:16:47 GMT
	I0103 20:16:47.405698  478496 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-004925","uid":"106a4418-70f9-4422-af71-2d3c896282a0","resourceVersion":"345","creationTimestamp":"2024-01-03T20:16:24Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-004925","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-004925","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T20_16_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T20:16:24Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0103 20:16:47.902252  478496 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-004925
	I0103 20:16:47.902278  478496 round_trippers.go:469] Request Headers:
	I0103 20:16:47.902288  478496 round_trippers.go:473]     Accept: application/json, */*
	I0103 20:16:47.902295  478496 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0103 20:16:47.905192  478496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 20:16:47.905219  478496 round_trippers.go:577] Response Headers:
	I0103 20:16:47.905228  478496 round_trippers.go:580]     Date: Wed, 03 Jan 2024 20:16:47 GMT
	I0103 20:16:47.905234  478496 round_trippers.go:580]     Audit-Id: a8213969-4464-4a9e-a1cc-7669084287dc
	I0103 20:16:47.905241  478496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 20:16:47.905248  478496 round_trippers.go:580]     Content-Type: application/json
	I0103 20:16:47.905254  478496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2dd5fe00-4fef-4b9d-9976-ced013b24f01
	I0103 20:16:47.905261  478496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1353a2-568a-4e94-914a-294122a8013a
	I0103 20:16:47.905504  478496 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-004925","uid":"106a4418-70f9-4422-af71-2d3c896282a0","resourceVersion":"345","creationTimestamp":"2024-01-03T20:16:24Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-004925","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-004925","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T20_16_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T20:16:24Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0103 20:16:48.402149  478496 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-004925
	I0103 20:16:48.402171  478496 round_trippers.go:469] Request Headers:
	I0103 20:16:48.402181  478496 round_trippers.go:473]     Accept: application/json, */*
	I0103 20:16:48.402189  478496 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0103 20:16:48.404850  478496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 20:16:48.404870  478496 round_trippers.go:577] Response Headers:
	I0103 20:16:48.404880  478496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 20:16:48.404887  478496 round_trippers.go:580]     Content-Type: application/json
	I0103 20:16:48.404893  478496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2dd5fe00-4fef-4b9d-9976-ced013b24f01
	I0103 20:16:48.404899  478496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1353a2-568a-4e94-914a-294122a8013a
	I0103 20:16:48.404906  478496 round_trippers.go:580]     Date: Wed, 03 Jan 2024 20:16:48 GMT
	I0103 20:16:48.404912  478496 round_trippers.go:580]     Audit-Id: 582e9d52-0e7c-464d-9e2b-4efbacdab126
	I0103 20:16:48.405046  478496 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-004925","uid":"106a4418-70f9-4422-af71-2d3c896282a0","resourceVersion":"345","creationTimestamp":"2024-01-03T20:16:24Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-004925","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-004925","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T20_16_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T20:16:24Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0103 20:16:48.902120  478496 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-004925
	I0103 20:16:48.902148  478496 round_trippers.go:469] Request Headers:
	I0103 20:16:48.902159  478496 round_trippers.go:473]     Accept: application/json, */*
	I0103 20:16:48.902166  478496 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0103 20:16:48.904753  478496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 20:16:48.904778  478496 round_trippers.go:577] Response Headers:
	I0103 20:16:48.904786  478496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2dd5fe00-4fef-4b9d-9976-ced013b24f01
	I0103 20:16:48.904793  478496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1353a2-568a-4e94-914a-294122a8013a
	I0103 20:16:48.904800  478496 round_trippers.go:580]     Date: Wed, 03 Jan 2024 20:16:48 GMT
	I0103 20:16:48.904807  478496 round_trippers.go:580]     Audit-Id: 808610ed-4770-4c7f-b6e3-a9babc650245
	I0103 20:16:48.904813  478496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 20:16:48.904819  478496 round_trippers.go:580]     Content-Type: application/json
	I0103 20:16:48.904972  478496 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-004925","uid":"106a4418-70f9-4422-af71-2d3c896282a0","resourceVersion":"345","creationTimestamp":"2024-01-03T20:16:24Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-004925","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-004925","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T20_16_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T20:16:24Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0103 20:16:48.905360  478496 node_ready.go:58] node "multinode-004925" has status "Ready":"False"
	I0103 20:16:49.402654  478496 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-004925
	I0103 20:16:49.402679  478496 round_trippers.go:469] Request Headers:
	I0103 20:16:49.402693  478496 round_trippers.go:473]     Accept: application/json, */*
	I0103 20:16:49.402702  478496 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0103 20:16:49.405309  478496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 20:16:49.405341  478496 round_trippers.go:577] Response Headers:
	I0103 20:16:49.405351  478496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 20:16:49.405360  478496 round_trippers.go:580]     Content-Type: application/json
	I0103 20:16:49.405370  478496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2dd5fe00-4fef-4b9d-9976-ced013b24f01
	I0103 20:16:49.405380  478496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1353a2-568a-4e94-914a-294122a8013a
	I0103 20:16:49.405391  478496 round_trippers.go:580]     Date: Wed, 03 Jan 2024 20:16:49 GMT
	I0103 20:16:49.405397  478496 round_trippers.go:580]     Audit-Id: 7de49523-ce9e-469d-a372-e7a53dc0052e
	I0103 20:16:49.405629  478496 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-004925","uid":"106a4418-70f9-4422-af71-2d3c896282a0","resourceVersion":"345","creationTimestamp":"2024-01-03T20:16:24Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-004925","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-004925","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T20_16_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T20:16:24Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0103 20:16:49.902273  478496 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-004925
	I0103 20:16:49.902297  478496 round_trippers.go:469] Request Headers:
	I0103 20:16:49.902306  478496 round_trippers.go:473]     Accept: application/json, */*
	I0103 20:16:49.902314  478496 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0103 20:16:49.904937  478496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 20:16:49.904957  478496 round_trippers.go:577] Response Headers:
	I0103 20:16:49.904968  478496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1353a2-568a-4e94-914a-294122a8013a
	I0103 20:16:49.904975  478496 round_trippers.go:580]     Date: Wed, 03 Jan 2024 20:16:49 GMT
	I0103 20:16:49.904982  478496 round_trippers.go:580]     Audit-Id: a360d703-004f-4580-a289-56be00f10eb6
	I0103 20:16:49.904988  478496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 20:16:49.904994  478496 round_trippers.go:580]     Content-Type: application/json
	I0103 20:16:49.905001  478496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2dd5fe00-4fef-4b9d-9976-ced013b24f01
	I0103 20:16:49.905226  478496 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-004925","uid":"106a4418-70f9-4422-af71-2d3c896282a0","resourceVersion":"345","creationTimestamp":"2024-01-03T20:16:24Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-004925","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-004925","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T20_16_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T20:16:24Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0103 20:16:50.402297  478496 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-004925
	I0103 20:16:50.402320  478496 round_trippers.go:469] Request Headers:
	I0103 20:16:50.402330  478496 round_trippers.go:473]     Accept: application/json, */*
	I0103 20:16:50.402337  478496 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0103 20:16:50.405153  478496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 20:16:50.405177  478496 round_trippers.go:577] Response Headers:
	I0103 20:16:50.405185  478496 round_trippers.go:580]     Audit-Id: 5c5fae94-caa1-492e-8165-c98a56b6e640
	I0103 20:16:50.405192  478496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 20:16:50.405198  478496 round_trippers.go:580]     Content-Type: application/json
	I0103 20:16:50.405204  478496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2dd5fe00-4fef-4b9d-9976-ced013b24f01
	I0103 20:16:50.405210  478496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1353a2-568a-4e94-914a-294122a8013a
	I0103 20:16:50.405217  478496 round_trippers.go:580]     Date: Wed, 03 Jan 2024 20:16:50 GMT
	I0103 20:16:50.405318  478496 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-004925","uid":"106a4418-70f9-4422-af71-2d3c896282a0","resourceVersion":"345","creationTimestamp":"2024-01-03T20:16:24Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-004925","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-004925","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T20_16_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T20:16:24Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0103 20:16:50.903001  478496 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-004925
	I0103 20:16:50.903027  478496 round_trippers.go:469] Request Headers:
	I0103 20:16:50.903036  478496 round_trippers.go:473]     Accept: application/json, */*
	I0103 20:16:50.903044  478496 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0103 20:16:50.905788  478496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 20:16:50.905813  478496 round_trippers.go:577] Response Headers:
	I0103 20:16:50.905822  478496 round_trippers.go:580]     Date: Wed, 03 Jan 2024 20:16:50 GMT
	I0103 20:16:50.905829  478496 round_trippers.go:580]     Audit-Id: c4dbf01c-3d95-412b-802c-9e357732d477
	I0103 20:16:50.905835  478496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 20:16:50.905842  478496 round_trippers.go:580]     Content-Type: application/json
	I0103 20:16:50.905848  478496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2dd5fe00-4fef-4b9d-9976-ced013b24f01
	I0103 20:16:50.905859  478496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1353a2-568a-4e94-914a-294122a8013a
	I0103 20:16:50.906065  478496 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-004925","uid":"106a4418-70f9-4422-af71-2d3c896282a0","resourceVersion":"345","creationTimestamp":"2024-01-03T20:16:24Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-004925","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-004925","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T20_16_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T20:16:24Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0103 20:16:50.906460  478496 node_ready.go:58] node "multinode-004925" has status "Ready":"False"
	I0103 20:16:51.403052  478496 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-004925
	I0103 20:16:51.403077  478496 round_trippers.go:469] Request Headers:
	I0103 20:16:51.403086  478496 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0103 20:16:51.403095  478496 round_trippers.go:473]     Accept: application/json, */*
	I0103 20:16:51.405894  478496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 20:16:51.405919  478496 round_trippers.go:577] Response Headers:
	I0103 20:16:51.405928  478496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 20:16:51.405935  478496 round_trippers.go:580]     Content-Type: application/json
	I0103 20:16:51.405941  478496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2dd5fe00-4fef-4b9d-9976-ced013b24f01
	I0103 20:16:51.405947  478496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1353a2-568a-4e94-914a-294122a8013a
	I0103 20:16:51.405954  478496 round_trippers.go:580]     Date: Wed, 03 Jan 2024 20:16:51 GMT
	I0103 20:16:51.405960  478496 round_trippers.go:580]     Audit-Id: 46147f36-2376-4d91-ac4d-d535fcd77ca1
	I0103 20:16:51.406226  478496 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-004925","uid":"106a4418-70f9-4422-af71-2d3c896282a0","resourceVersion":"345","creationTimestamp":"2024-01-03T20:16:24Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-004925","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-004925","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T20_16_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T20:16:24Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0103 20:16:51.902981  478496 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-004925
	I0103 20:16:51.903008  478496 round_trippers.go:469] Request Headers:
	I0103 20:16:51.903018  478496 round_trippers.go:473]     Accept: application/json, */*
	I0103 20:16:51.903025  478496 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0103 20:16:51.905687  478496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 20:16:51.905713  478496 round_trippers.go:577] Response Headers:
	I0103 20:16:51.905721  478496 round_trippers.go:580]     Audit-Id: dba42c47-e47e-4270-a622-7ade4b28bf9f
	I0103 20:16:51.905728  478496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 20:16:51.905734  478496 round_trippers.go:580]     Content-Type: application/json
	I0103 20:16:51.905743  478496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2dd5fe00-4fef-4b9d-9976-ced013b24f01
	I0103 20:16:51.905750  478496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1353a2-568a-4e94-914a-294122a8013a
	I0103 20:16:51.905759  478496 round_trippers.go:580]     Date: Wed, 03 Jan 2024 20:16:51 GMT
	I0103 20:16:51.905895  478496 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-004925","uid":"106a4418-70f9-4422-af71-2d3c896282a0","resourceVersion":"345","creationTimestamp":"2024-01-03T20:16:24Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-004925","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-004925","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T20_16_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T20:16:24Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0103 20:16:52.402676  478496 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-004925
	I0103 20:16:52.402699  478496 round_trippers.go:469] Request Headers:
	I0103 20:16:52.402710  478496 round_trippers.go:473]     Accept: application/json, */*
	I0103 20:16:52.402717  478496 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0103 20:16:52.405259  478496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 20:16:52.405286  478496 round_trippers.go:577] Response Headers:
	I0103 20:16:52.405294  478496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1353a2-568a-4e94-914a-294122a8013a
	I0103 20:16:52.405302  478496 round_trippers.go:580]     Date: Wed, 03 Jan 2024 20:16:52 GMT
	I0103 20:16:52.405308  478496 round_trippers.go:580]     Audit-Id: c90cb53d-e030-4e0d-9829-9d02eefb347f
	I0103 20:16:52.405314  478496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 20:16:52.405321  478496 round_trippers.go:580]     Content-Type: application/json
	I0103 20:16:52.405333  478496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2dd5fe00-4fef-4b9d-9976-ced013b24f01
	I0103 20:16:52.405578  478496 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-004925","uid":"106a4418-70f9-4422-af71-2d3c896282a0","resourceVersion":"345","creationTimestamp":"2024-01-03T20:16:24Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-004925","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-004925","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T20_16_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T20:16:24Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0103 20:16:52.902751  478496 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-004925
	I0103 20:16:52.902775  478496 round_trippers.go:469] Request Headers:
	I0103 20:16:52.902785  478496 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0103 20:16:52.902793  478496 round_trippers.go:473]     Accept: application/json, */*
	I0103 20:16:52.905410  478496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 20:16:52.905430  478496 round_trippers.go:577] Response Headers:
	I0103 20:16:52.905439  478496 round_trippers.go:580]     Date: Wed, 03 Jan 2024 20:16:52 GMT
	I0103 20:16:52.905445  478496 round_trippers.go:580]     Audit-Id: dfc235e4-01e9-4ea7-ab7f-9b44d20dd9b4
	I0103 20:16:52.905452  478496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 20:16:52.905458  478496 round_trippers.go:580]     Content-Type: application/json
	I0103 20:16:52.905464  478496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2dd5fe00-4fef-4b9d-9976-ced013b24f01
	I0103 20:16:52.905470  478496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1353a2-568a-4e94-914a-294122a8013a
	I0103 20:16:52.905610  478496 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-004925","uid":"106a4418-70f9-4422-af71-2d3c896282a0","resourceVersion":"345","creationTimestamp":"2024-01-03T20:16:24Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-004925","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-004925","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T20_16_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T20:16:24Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0103 20:16:53.402642  478496 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-004925
	I0103 20:16:53.402667  478496 round_trippers.go:469] Request Headers:
	I0103 20:16:53.402676  478496 round_trippers.go:473]     Accept: application/json, */*
	I0103 20:16:53.402684  478496 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0103 20:16:53.406562  478496 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0103 20:16:53.406583  478496 round_trippers.go:577] Response Headers:
	I0103 20:16:53.406592  478496 round_trippers.go:580]     Date: Wed, 03 Jan 2024 20:16:53 GMT
	I0103 20:16:53.406599  478496 round_trippers.go:580]     Audit-Id: 59a747a3-d322-405a-ae15-06ac9a4a5edc
	I0103 20:16:53.406605  478496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 20:16:53.406612  478496 round_trippers.go:580]     Content-Type: application/json
	I0103 20:16:53.406618  478496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2dd5fe00-4fef-4b9d-9976-ced013b24f01
	I0103 20:16:53.406625  478496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1353a2-568a-4e94-914a-294122a8013a
	I0103 20:16:53.406746  478496 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-004925","uid":"106a4418-70f9-4422-af71-2d3c896282a0","resourceVersion":"345","creationTimestamp":"2024-01-03T20:16:24Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-004925","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-004925","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T20_16_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T20:16:24Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0103 20:16:53.407131  478496 node_ready.go:58] node "multinode-004925" has status "Ready":"False"
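The request/response header dumps and the "[truncated 6223 chars]" body markers throughout this log are client-go's round-tripper debugging, which switches on at high klog verbosity and caps logged bodies below the top verbosity level (hence the truncation); the exact thresholds depend on the client-go version. A sketch of enabling the same verbosity in a client-go program, assuming klog/v2:

	// Sketch: turn on the klog verbosity that produces round_trippers dumps.
	package main

	import (
		"flag"

		"k8s.io/klog/v2"
	)

	func main() {
		klog.InitFlags(nil) // registers -v and related flags on the default FlagSet
		flag.Set("v", "8")  // verbosity at which client-go logs requests, headers, and truncated bodies
		flag.Parse()
		defer klog.Flush()
		klog.V(8).Info("round-tripper debugging enabled")
	}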
	I0103 20:16:53.902855  478496 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-004925
	I0103 20:16:53.902882  478496 round_trippers.go:469] Request Headers:
	I0103 20:16:53.902892  478496 round_trippers.go:473]     Accept: application/json, */*
	I0103 20:16:53.902899  478496 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0103 20:16:53.905423  478496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 20:16:53.905443  478496 round_trippers.go:577] Response Headers:
	I0103 20:16:53.905451  478496 round_trippers.go:580]     Audit-Id: 51415e80-3f0a-4448-acf7-8620394708bd
	I0103 20:16:53.905457  478496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 20:16:53.905463  478496 round_trippers.go:580]     Content-Type: application/json
	I0103 20:16:53.905469  478496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2dd5fe00-4fef-4b9d-9976-ced013b24f01
	I0103 20:16:53.905475  478496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1353a2-568a-4e94-914a-294122a8013a
	I0103 20:16:53.905482  478496 round_trippers.go:580]     Date: Wed, 03 Jan 2024 20:16:53 GMT
	I0103 20:16:53.905615  478496 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-004925","uid":"106a4418-70f9-4422-af71-2d3c896282a0","resourceVersion":"345","creationTimestamp":"2024-01-03T20:16:24Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-004925","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-004925","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T20_16_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T20:16:24Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0103 20:16:54.402774  478496 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-004925
	I0103 20:16:54.402801  478496 round_trippers.go:469] Request Headers:
	I0103 20:16:54.402811  478496 round_trippers.go:473]     Accept: application/json, */*
	I0103 20:16:54.402820  478496 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0103 20:16:54.405361  478496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 20:16:54.405383  478496 round_trippers.go:577] Response Headers:
	I0103 20:16:54.405392  478496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2dd5fe00-4fef-4b9d-9976-ced013b24f01
	I0103 20:16:54.405398  478496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1353a2-568a-4e94-914a-294122a8013a
	I0103 20:16:54.405405  478496 round_trippers.go:580]     Date: Wed, 03 Jan 2024 20:16:54 GMT
	I0103 20:16:54.405411  478496 round_trippers.go:580]     Audit-Id: 84a3603b-e2d7-482a-b9c9-2e88039d2b64
	I0103 20:16:54.405417  478496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 20:16:54.405424  478496 round_trippers.go:580]     Content-Type: application/json
	I0103 20:16:54.405579  478496 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-004925","uid":"106a4418-70f9-4422-af71-2d3c896282a0","resourceVersion":"345","creationTimestamp":"2024-01-03T20:16:24Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-004925","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-004925","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T20_16_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T20:16:24Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0103 20:16:54.902739  478496 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-004925
	I0103 20:16:54.902766  478496 round_trippers.go:469] Request Headers:
	I0103 20:16:54.902776  478496 round_trippers.go:473]     Accept: application/json, */*
	I0103 20:16:54.902791  478496 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0103 20:16:54.905459  478496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 20:16:54.905488  478496 round_trippers.go:577] Response Headers:
	I0103 20:16:54.905502  478496 round_trippers.go:580]     Date: Wed, 03 Jan 2024 20:16:54 GMT
	I0103 20:16:54.905509  478496 round_trippers.go:580]     Audit-Id: 03245c13-c533-4374-9670-1b0490564140
	I0103 20:16:54.905515  478496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 20:16:54.905521  478496 round_trippers.go:580]     Content-Type: application/json
	I0103 20:16:54.905528  478496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2dd5fe00-4fef-4b9d-9976-ced013b24f01
	I0103 20:16:54.905540  478496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1353a2-568a-4e94-914a-294122a8013a
	I0103 20:16:54.905687  478496 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-004925","uid":"106a4418-70f9-4422-af71-2d3c896282a0","resourceVersion":"345","creationTimestamp":"2024-01-03T20:16:24Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-004925","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-004925","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T20_16_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T20:16:24Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0103 20:16:55.402899  478496 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-004925
	I0103 20:16:55.402939  478496 round_trippers.go:469] Request Headers:
	I0103 20:16:55.402950  478496 round_trippers.go:473]     Accept: application/json, */*
	I0103 20:16:55.402958  478496 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0103 20:16:55.405626  478496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 20:16:55.405660  478496 round_trippers.go:577] Response Headers:
	I0103 20:16:55.405677  478496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1353a2-568a-4e94-914a-294122a8013a
	I0103 20:16:55.405685  478496 round_trippers.go:580]     Date: Wed, 03 Jan 2024 20:16:55 GMT
	I0103 20:16:55.405691  478496 round_trippers.go:580]     Audit-Id: 7ecf27d4-eaa8-41bf-b513-bace80079cda
	I0103 20:16:55.405699  478496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 20:16:55.405705  478496 round_trippers.go:580]     Content-Type: application/json
	I0103 20:16:55.405711  478496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2dd5fe00-4fef-4b9d-9976-ced013b24f01
	I0103 20:16:55.405807  478496 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-004925","uid":"106a4418-70f9-4422-af71-2d3c896282a0","resourceVersion":"345","creationTimestamp":"2024-01-03T20:16:24Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-004925","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-004925","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T20_16_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T20:16:24Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0103 20:16:55.902214  478496 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-004925
	I0103 20:16:55.902243  478496 round_trippers.go:469] Request Headers:
	I0103 20:16:55.902253  478496 round_trippers.go:473]     Accept: application/json, */*
	I0103 20:16:55.902260  478496 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0103 20:16:55.905045  478496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 20:16:55.905068  478496 round_trippers.go:577] Response Headers:
	I0103 20:16:55.905078  478496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1353a2-568a-4e94-914a-294122a8013a
	I0103 20:16:55.905084  478496 round_trippers.go:580]     Date: Wed, 03 Jan 2024 20:16:55 GMT
	I0103 20:16:55.905091  478496 round_trippers.go:580]     Audit-Id: fe24da31-8370-4f7b-ab7f-a6fcfe9bfc46
	I0103 20:16:55.905097  478496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 20:16:55.905103  478496 round_trippers.go:580]     Content-Type: application/json
	I0103 20:16:55.905109  478496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2dd5fe00-4fef-4b9d-9976-ced013b24f01
	I0103 20:16:55.905304  478496 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-004925","uid":"106a4418-70f9-4422-af71-2d3c896282a0","resourceVersion":"345","creationTimestamp":"2024-01-03T20:16:24Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-004925","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-004925","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T20_16_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T20:16:24Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0103 20:16:55.905710  478496 node_ready.go:58] node "multinode-004925" has status "Ready":"False"
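(The loop above is minikube's node-readiness wait: roughly every 500ms it GETs /api/v1/nodes/multinode-004925 from the API server at 192.168.58.2:8443 and inspects the returned Node's Ready condition, logging has status "Ready":"False" until the kubelet reports ready. Below is a minimal sketch of the same poll pattern, assuming client-go; the helper name, kubeconfig source, and 5-minute budget are illustrative, not minikube's actual node_ready.go code.)

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitNodeReady polls the Node object until its Ready condition is True,
	// mirroring the GET-every-500ms pattern visible in the log above.
	func waitNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string) error {
		ticker := time.NewTicker(500 * time.Millisecond)
		defer ticker.Stop()
		for {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err == nil {
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
						return nil
					}
				}
				fmt.Printf("node %q has status Ready: False\n", name)
			}
			select {
			case <-ctx.Done():
				return ctx.Err() // give up when the overall wait budget expires
			case <-ticker.C:
			}
		}
	}

	func main() {
		// Illustrative: load ~/.kube/config; minikube builds its config per profile.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Minute)
		defer cancel()
		if err := waitNodeReady(ctx, cs, "multinode-004925"); err != nil {
			panic(err)
		}
	}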
	I0103 20:16:56.402947  478496 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-004925
	I0103 20:16:56.402970  478496 round_trippers.go:469] Request Headers:
	I0103 20:16:56.402980  478496 round_trippers.go:473]     Accept: application/json, */*
	I0103 20:16:56.402988  478496 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0103 20:16:56.405570  478496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 20:16:56.405594  478496 round_trippers.go:577] Response Headers:
	I0103 20:16:56.405602  478496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 20:16:56.405608  478496 round_trippers.go:580]     Content-Type: application/json
	I0103 20:16:56.405614  478496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2dd5fe00-4fef-4b9d-9976-ced013b24f01
	I0103 20:16:56.405621  478496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1353a2-568a-4e94-914a-294122a8013a
	I0103 20:16:56.405627  478496 round_trippers.go:580]     Date: Wed, 03 Jan 2024 20:16:56 GMT
	I0103 20:16:56.405633  478496 round_trippers.go:580]     Audit-Id: ad05efe8-9b2f-46f9-a8aa-c147ad944918
	I0103 20:16:56.405751  478496 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-004925","uid":"106a4418-70f9-4422-af71-2d3c896282a0","resourceVersion":"345","creationTimestamp":"2024-01-03T20:16:24Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-004925","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-004925","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T20_16_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T20:16:24Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0103 20:16:56.902796  478496 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-004925
	I0103 20:16:56.902820  478496 round_trippers.go:469] Request Headers:
	I0103 20:16:56.902830  478496 round_trippers.go:473]     Accept: application/json, */*
	I0103 20:16:56.902837  478496 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0103 20:16:56.905762  478496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 20:16:56.905786  478496 round_trippers.go:577] Response Headers:
	I0103 20:16:56.905795  478496 round_trippers.go:580]     Content-Type: application/json
	I0103 20:16:56.905802  478496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2dd5fe00-4fef-4b9d-9976-ced013b24f01
	I0103 20:16:56.905809  478496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1353a2-568a-4e94-914a-294122a8013a
	I0103 20:16:56.905816  478496 round_trippers.go:580]     Date: Wed, 03 Jan 2024 20:16:56 GMT
	I0103 20:16:56.905822  478496 round_trippers.go:580]     Audit-Id: 92bb66ea-83ce-48d3-9a7e-2988893aee45
	I0103 20:16:56.905829  478496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 20:16:56.906005  478496 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-004925","uid":"106a4418-70f9-4422-af71-2d3c896282a0","resourceVersion":"345","creationTimestamp":"2024-01-03T20:16:24Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-004925","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-004925","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T20_16_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T20:16:24Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0103 20:16:57.402575  478496 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-004925
	I0103 20:16:57.402596  478496 round_trippers.go:469] Request Headers:
	I0103 20:16:57.402606  478496 round_trippers.go:473]     Accept: application/json, */*
	I0103 20:16:57.402613  478496 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0103 20:16:57.405242  478496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 20:16:57.405262  478496 round_trippers.go:577] Response Headers:
	I0103 20:16:57.405272  478496 round_trippers.go:580]     Date: Wed, 03 Jan 2024 20:16:57 GMT
	I0103 20:16:57.405279  478496 round_trippers.go:580]     Audit-Id: 4ae3f38d-92ee-4882-831c-31057456b26b
	I0103 20:16:57.405285  478496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 20:16:57.405291  478496 round_trippers.go:580]     Content-Type: application/json
	I0103 20:16:57.405299  478496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2dd5fe00-4fef-4b9d-9976-ced013b24f01
	I0103 20:16:57.405305  478496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1353a2-568a-4e94-914a-294122a8013a
	I0103 20:16:57.405459  478496 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-004925","uid":"106a4418-70f9-4422-af71-2d3c896282a0","resourceVersion":"345","creationTimestamp":"2024-01-03T20:16:24Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-004925","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-004925","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T20_16_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T20:16:24Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0103 20:16:57.902055  478496 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-004925
	I0103 20:16:57.902086  478496 round_trippers.go:469] Request Headers:
	I0103 20:16:57.902101  478496 round_trippers.go:473]     Accept: application/json, */*
	I0103 20:16:57.902108  478496 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0103 20:16:57.904762  478496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 20:16:57.904782  478496 round_trippers.go:577] Response Headers:
	I0103 20:16:57.904790  478496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1353a2-568a-4e94-914a-294122a8013a
	I0103 20:16:57.904797  478496 round_trippers.go:580]     Date: Wed, 03 Jan 2024 20:16:57 GMT
	I0103 20:16:57.904810  478496 round_trippers.go:580]     Audit-Id: 56f32e98-fa46-4a89-8f53-69d6d32318f9
	I0103 20:16:57.904817  478496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 20:16:57.904824  478496 round_trippers.go:580]     Content-Type: application/json
	I0103 20:16:57.904830  478496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2dd5fe00-4fef-4b9d-9976-ced013b24f01
	I0103 20:16:57.905078  478496 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-004925","uid":"106a4418-70f9-4422-af71-2d3c896282a0","resourceVersion":"345","creationTimestamp":"2024-01-03T20:16:24Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-004925","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-004925","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T20_16_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T20:16:24Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0103 20:16:58.402813  478496 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-004925
	I0103 20:16:58.402841  478496 round_trippers.go:469] Request Headers:
	I0103 20:16:58.402851  478496 round_trippers.go:473]     Accept: application/json, */*
	I0103 20:16:58.402858  478496 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0103 20:16:58.406580  478496 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0103 20:16:58.406604  478496 round_trippers.go:577] Response Headers:
	I0103 20:16:58.406613  478496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1353a2-568a-4e94-914a-294122a8013a
	I0103 20:16:58.406621  478496 round_trippers.go:580]     Date: Wed, 03 Jan 2024 20:16:58 GMT
	I0103 20:16:58.406627  478496 round_trippers.go:580]     Audit-Id: 4d6bf8bf-0f8e-4009-848b-b0f99e2dc77c
	I0103 20:16:58.406633  478496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 20:16:58.406640  478496 round_trippers.go:580]     Content-Type: application/json
	I0103 20:16:58.406646  478496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2dd5fe00-4fef-4b9d-9976-ced013b24f01
	I0103 20:16:58.406769  478496 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-004925","uid":"106a4418-70f9-4422-af71-2d3c896282a0","resourceVersion":"345","creationTimestamp":"2024-01-03T20:16:24Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-004925","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-004925","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T20_16_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T20:16:24Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0103 20:16:58.407159  478496 node_ready.go:58] node "multinode-004925" has status "Ready":"False"
	I0103 20:16:58.903092  478496 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-004925
	I0103 20:16:58.903117  478496 round_trippers.go:469] Request Headers:
	I0103 20:16:58.903127  478496 round_trippers.go:473]     Accept: application/json, */*
	I0103 20:16:58.903134  478496 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0103 20:16:58.905815  478496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 20:16:58.905836  478496 round_trippers.go:577] Response Headers:
	I0103 20:16:58.905845  478496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1353a2-568a-4e94-914a-294122a8013a
	I0103 20:16:58.905852  478496 round_trippers.go:580]     Date: Wed, 03 Jan 2024 20:16:58 GMT
	I0103 20:16:58.905858  478496 round_trippers.go:580]     Audit-Id: 4c65b265-4c66-4053-877d-efe2931f6edd
	I0103 20:16:58.905864  478496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 20:16:58.905870  478496 round_trippers.go:580]     Content-Type: application/json
	I0103 20:16:58.905876  478496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2dd5fe00-4fef-4b9d-9976-ced013b24f01
	I0103 20:16:58.906004  478496 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-004925","uid":"106a4418-70f9-4422-af71-2d3c896282a0","resourceVersion":"345","creationTimestamp":"2024-01-03T20:16:24Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-004925","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-004925","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T20_16_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T20:16:24Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0103 20:16:59.402115  478496 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-004925
	I0103 20:16:59.402140  478496 round_trippers.go:469] Request Headers:
	I0103 20:16:59.402151  478496 round_trippers.go:473]     Accept: application/json, */*
	I0103 20:16:59.402158  478496 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0103 20:16:59.404808  478496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 20:16:59.404834  478496 round_trippers.go:577] Response Headers:
	I0103 20:16:59.404843  478496 round_trippers.go:580]     Audit-Id: 6bca7688-aabf-4e90-9303-24c43863a0a1
	I0103 20:16:59.404850  478496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 20:16:59.404861  478496 round_trippers.go:580]     Content-Type: application/json
	I0103 20:16:59.404869  478496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2dd5fe00-4fef-4b9d-9976-ced013b24f01
	I0103 20:16:59.404876  478496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1353a2-568a-4e94-914a-294122a8013a
	I0103 20:16:59.404882  478496 round_trippers.go:580]     Date: Wed, 03 Jan 2024 20:16:59 GMT
	I0103 20:16:59.404981  478496 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-004925","uid":"106a4418-70f9-4422-af71-2d3c896282a0","resourceVersion":"345","creationTimestamp":"2024-01-03T20:16:24Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-004925","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-004925","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T20_16_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T20:16:24Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0103 20:16:59.902100  478496 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-004925
	I0103 20:16:59.902126  478496 round_trippers.go:469] Request Headers:
	I0103 20:16:59.902136  478496 round_trippers.go:473]     Accept: application/json, */*
	I0103 20:16:59.902143  478496 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0103 20:16:59.904733  478496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 20:16:59.904759  478496 round_trippers.go:577] Response Headers:
	I0103 20:16:59.904768  478496 round_trippers.go:580]     Audit-Id: ea8c0bd1-a607-41f2-adac-e9315eca21e3
	I0103 20:16:59.904775  478496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 20:16:59.904781  478496 round_trippers.go:580]     Content-Type: application/json
	I0103 20:16:59.904788  478496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2dd5fe00-4fef-4b9d-9976-ced013b24f01
	I0103 20:16:59.904794  478496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1353a2-568a-4e94-914a-294122a8013a
	I0103 20:16:59.904801  478496 round_trippers.go:580]     Date: Wed, 03 Jan 2024 20:16:59 GMT
	I0103 20:16:59.904987  478496 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-004925","uid":"106a4418-70f9-4422-af71-2d3c896282a0","resourceVersion":"345","creationTimestamp":"2024-01-03T20:16:24Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-004925","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-004925","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T20_16_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T20:16:24Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0103 20:17:00.402595  478496 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-004925
	I0103 20:17:00.402624  478496 round_trippers.go:469] Request Headers:
	I0103 20:17:00.402635  478496 round_trippers.go:473]     Accept: application/json, */*
	I0103 20:17:00.402643  478496 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0103 20:17:00.406238  478496 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0103 20:17:00.406276  478496 round_trippers.go:577] Response Headers:
	I0103 20:17:00.406286  478496 round_trippers.go:580]     Date: Wed, 03 Jan 2024 20:17:00 GMT
	I0103 20:17:00.406295  478496 round_trippers.go:580]     Audit-Id: 7c232aab-26ac-4e36-bf89-6280f2f015f7
	I0103 20:17:00.406304  478496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 20:17:00.406311  478496 round_trippers.go:580]     Content-Type: application/json
	I0103 20:17:00.406318  478496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2dd5fe00-4fef-4b9d-9976-ced013b24f01
	I0103 20:17:00.406326  478496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1353a2-568a-4e94-914a-294122a8013a
	I0103 20:17:00.406445  478496 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-004925","uid":"106a4418-70f9-4422-af71-2d3c896282a0","resourceVersion":"345","creationTimestamp":"2024-01-03T20:16:24Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-004925","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-004925","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T20_16_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T20:16:24Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0103 20:17:00.902564  478496 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-004925
	I0103 20:17:00.902590  478496 round_trippers.go:469] Request Headers:
	I0103 20:17:00.902600  478496 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0103 20:17:00.902608  478496 round_trippers.go:473]     Accept: application/json, */*
	I0103 20:17:00.905608  478496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 20:17:00.905638  478496 round_trippers.go:577] Response Headers:
	I0103 20:17:00.905647  478496 round_trippers.go:580]     Content-Type: application/json
	I0103 20:17:00.905654  478496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2dd5fe00-4fef-4b9d-9976-ced013b24f01
	I0103 20:17:00.905661  478496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1353a2-568a-4e94-914a-294122a8013a
	I0103 20:17:00.905668  478496 round_trippers.go:580]     Date: Wed, 03 Jan 2024 20:17:00 GMT
	I0103 20:17:00.905674  478496 round_trippers.go:580]     Audit-Id: 96367a97-e2fc-43ef-bdec-56d3b7e4dc14
	I0103 20:17:00.905681  478496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 20:17:00.905891  478496 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-004925","uid":"106a4418-70f9-4422-af71-2d3c896282a0","resourceVersion":"345","creationTimestamp":"2024-01-03T20:16:24Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-004925","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-004925","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T20_16_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T20:16:24Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0103 20:17:00.906303  478496 node_ready.go:58] node "multinode-004925" has status "Ready":"False"
	I0103 20:17:01.402699  478496 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-004925
	I0103 20:17:01.402724  478496 round_trippers.go:469] Request Headers:
	I0103 20:17:01.402735  478496 round_trippers.go:473]     Accept: application/json, */*
	I0103 20:17:01.402748  478496 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0103 20:17:01.405572  478496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 20:17:01.405597  478496 round_trippers.go:577] Response Headers:
	I0103 20:17:01.405606  478496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1353a2-568a-4e94-914a-294122a8013a
	I0103 20:17:01.405613  478496 round_trippers.go:580]     Date: Wed, 03 Jan 2024 20:17:01 GMT
	I0103 20:17:01.405620  478496 round_trippers.go:580]     Audit-Id: f7961f5a-137c-43e0-bd07-36dc19d49f5b
	I0103 20:17:01.405626  478496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 20:17:01.405632  478496 round_trippers.go:580]     Content-Type: application/json
	I0103 20:17:01.405639  478496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2dd5fe00-4fef-4b9d-9976-ced013b24f01
	I0103 20:17:01.405774  478496 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-004925","uid":"106a4418-70f9-4422-af71-2d3c896282a0","resourceVersion":"345","creationTimestamp":"2024-01-03T20:16:24Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-004925","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-004925","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T20_16_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T20:16:24Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0103 20:17:01.903044  478496 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-004925
	I0103 20:17:01.903083  478496 round_trippers.go:469] Request Headers:
	I0103 20:17:01.903095  478496 round_trippers.go:473]     Accept: application/json, */*
	I0103 20:17:01.903103  478496 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0103 20:17:01.905999  478496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 20:17:01.906025  478496 round_trippers.go:577] Response Headers:
	I0103 20:17:01.906035  478496 round_trippers.go:580]     Content-Type: application/json
	I0103 20:17:01.906042  478496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2dd5fe00-4fef-4b9d-9976-ced013b24f01
	I0103 20:17:01.906050  478496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1353a2-568a-4e94-914a-294122a8013a
	I0103 20:17:01.906056  478496 round_trippers.go:580]     Date: Wed, 03 Jan 2024 20:17:01 GMT
	I0103 20:17:01.906063  478496 round_trippers.go:580]     Audit-Id: 0fff152a-1946-46c8-85b2-68bbd0c1482c
	I0103 20:17:01.906070  478496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 20:17:01.906218  478496 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-004925","uid":"106a4418-70f9-4422-af71-2d3c896282a0","resourceVersion":"345","creationTimestamp":"2024-01-03T20:16:24Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-004925","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-004925","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T20_16_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T20:16:24Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0103 20:17:02.402309  478496 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-004925
	I0103 20:17:02.402339  478496 round_trippers.go:469] Request Headers:
	I0103 20:17:02.402356  478496 round_trippers.go:473]     Accept: application/json, */*
	I0103 20:17:02.402364  478496 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0103 20:17:02.405422  478496 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0103 20:17:02.405450  478496 round_trippers.go:577] Response Headers:
	I0103 20:17:02.405459  478496 round_trippers.go:580]     Audit-Id: 6042d102-facb-496d-97c7-c530a8907833
	I0103 20:17:02.405466  478496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 20:17:02.405472  478496 round_trippers.go:580]     Content-Type: application/json
	I0103 20:17:02.405479  478496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2dd5fe00-4fef-4b9d-9976-ced013b24f01
	I0103 20:17:02.405485  478496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1353a2-568a-4e94-914a-294122a8013a
	I0103 20:17:02.405491  478496 round_trippers.go:580]     Date: Wed, 03 Jan 2024 20:17:02 GMT
	I0103 20:17:02.405616  478496 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-004925","uid":"106a4418-70f9-4422-af71-2d3c896282a0","resourceVersion":"345","creationTimestamp":"2024-01-03T20:16:24Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-004925","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-004925","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T20_16_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T20:16:24Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0103 20:17:02.902807  478496 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-004925
	I0103 20:17:02.902836  478496 round_trippers.go:469] Request Headers:
	I0103 20:17:02.902846  478496 round_trippers.go:473]     Accept: application/json, */*
	I0103 20:17:02.902865  478496 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0103 20:17:02.905811  478496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 20:17:02.905836  478496 round_trippers.go:577] Response Headers:
	I0103 20:17:02.905845  478496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2dd5fe00-4fef-4b9d-9976-ced013b24f01
	I0103 20:17:02.905851  478496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1353a2-568a-4e94-914a-294122a8013a
	I0103 20:17:02.905859  478496 round_trippers.go:580]     Date: Wed, 03 Jan 2024 20:17:02 GMT
	I0103 20:17:02.905865  478496 round_trippers.go:580]     Audit-Id: ef7f3082-5fdf-4c2b-b044-4f06c2c5db3f
	I0103 20:17:02.905872  478496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 20:17:02.905878  478496 round_trippers.go:580]     Content-Type: application/json
	I0103 20:17:02.906017  478496 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-004925","uid":"106a4418-70f9-4422-af71-2d3c896282a0","resourceVersion":"345","creationTimestamp":"2024-01-03T20:16:24Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-004925","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-004925","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T20_16_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T20:16:24Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0103 20:17:02.906422  478496 node_ready.go:58] node "multinode-004925" has status "Ready":"False"
	I0103 20:17:03.402428  478496 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-004925
	I0103 20:17:03.402453  478496 round_trippers.go:469] Request Headers:
	I0103 20:17:03.402464  478496 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0103 20:17:03.402481  478496 round_trippers.go:473]     Accept: application/json, */*
	I0103 20:17:03.405404  478496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 20:17:03.405433  478496 round_trippers.go:577] Response Headers:
	I0103 20:17:03.405443  478496 round_trippers.go:580]     Audit-Id: 456c1d21-fc43-4539-b27a-1c702bd40c93
	I0103 20:17:03.405450  478496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 20:17:03.405457  478496 round_trippers.go:580]     Content-Type: application/json
	I0103 20:17:03.405463  478496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2dd5fe00-4fef-4b9d-9976-ced013b24f01
	I0103 20:17:03.405469  478496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1353a2-568a-4e94-914a-294122a8013a
	I0103 20:17:03.405476  478496 round_trippers.go:580]     Date: Wed, 03 Jan 2024 20:17:03 GMT
	I0103 20:17:03.405608  478496 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-004925","uid":"106a4418-70f9-4422-af71-2d3c896282a0","resourceVersion":"345","creationTimestamp":"2024-01-03T20:16:24Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-004925","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-004925","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T20_16_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T20:16:24Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0103 20:17:03.902435  478496 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-004925
	I0103 20:17:03.902460  478496 round_trippers.go:469] Request Headers:
	I0103 20:17:03.902470  478496 round_trippers.go:473]     Accept: application/json, */*
	I0103 20:17:03.902487  478496 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0103 20:17:03.905393  478496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 20:17:03.905417  478496 round_trippers.go:577] Response Headers:
	I0103 20:17:03.905425  478496 round_trippers.go:580]     Date: Wed, 03 Jan 2024 20:17:03 GMT
	I0103 20:17:03.905432  478496 round_trippers.go:580]     Audit-Id: 35427639-ce67-401c-bde5-95d2b416ba1c
	I0103 20:17:03.905439  478496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 20:17:03.905445  478496 round_trippers.go:580]     Content-Type: application/json
	I0103 20:17:03.905452  478496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2dd5fe00-4fef-4b9d-9976-ced013b24f01
	I0103 20:17:03.905458  478496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1353a2-568a-4e94-914a-294122a8013a
	I0103 20:17:03.905653  478496 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-004925","uid":"106a4418-70f9-4422-af71-2d3c896282a0","resourceVersion":"345","creationTimestamp":"2024-01-03T20:16:24Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-004925","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-004925","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T20_16_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T20:16:24Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0103 20:17:04.402767  478496 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-004925
	I0103 20:17:04.402796  478496 round_trippers.go:469] Request Headers:
	I0103 20:17:04.402807  478496 round_trippers.go:473]     Accept: application/json, */*
	I0103 20:17:04.402830  478496 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0103 20:17:04.405596  478496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 20:17:04.405624  478496 round_trippers.go:577] Response Headers:
	I0103 20:17:04.405634  478496 round_trippers.go:580]     Audit-Id: 7b088740-7bdd-4028-89bc-925f13ec2bca
	I0103 20:17:04.405641  478496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 20:17:04.405647  478496 round_trippers.go:580]     Content-Type: application/json
	I0103 20:17:04.405654  478496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2dd5fe00-4fef-4b9d-9976-ced013b24f01
	I0103 20:17:04.405660  478496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1353a2-568a-4e94-914a-294122a8013a
	I0103 20:17:04.405667  478496 round_trippers.go:580]     Date: Wed, 03 Jan 2024 20:17:04 GMT
	I0103 20:17:04.405783  478496 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-004925","uid":"106a4418-70f9-4422-af71-2d3c896282a0","resourceVersion":"345","creationTimestamp":"2024-01-03T20:16:24Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-004925","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-004925","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T20_16_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T20:16:24Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0103 20:17:04.902987  478496 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-004925
	I0103 20:17:04.903023  478496 round_trippers.go:469] Request Headers:
	I0103 20:17:04.903034  478496 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0103 20:17:04.903050  478496 round_trippers.go:473]     Accept: application/json, */*
	I0103 20:17:04.905933  478496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 20:17:04.905960  478496 round_trippers.go:577] Response Headers:
	I0103 20:17:04.905968  478496 round_trippers.go:580]     Audit-Id: 0453abf5-c91d-471b-a825-702c418c643f
	I0103 20:17:04.905977  478496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 20:17:04.905983  478496 round_trippers.go:580]     Content-Type: application/json
	I0103 20:17:04.905989  478496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2dd5fe00-4fef-4b9d-9976-ced013b24f01
	I0103 20:17:04.905995  478496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1353a2-568a-4e94-914a-294122a8013a
	I0103 20:17:04.906002  478496 round_trippers.go:580]     Date: Wed, 03 Jan 2024 20:17:04 GMT
	I0103 20:17:04.906221  478496 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-004925","uid":"106a4418-70f9-4422-af71-2d3c896282a0","resourceVersion":"345","creationTimestamp":"2024-01-03T20:16:24Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-004925","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-004925","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T20_16_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T20:16:24Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0103 20:17:04.906663  478496 node_ready.go:58] node "multinode-004925" has status "Ready":"False"
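(Each poll iteration above is, on the wire, a single authenticated HTTPS GET carrying the Accept and User-Agent headers that round_trippers logs. A bare net/http sketch of an equivalent request follows; the bearer token and the skipped TLS verification are placeholders for illustration only, since minikube actually authenticates with per-profile client certificates and a pinned cluster CA.)

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		// Hypothetical credential; minikube stores client certs/keys per profile instead.
		const token = "REPLACE_WITH_BEARER_TOKEN"

		client := &http.Client{
			Timeout: 10 * time.Second,
			// InsecureSkipVerify only for illustration; verify the cluster CA in practice.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}

		req, err := http.NewRequest(http.MethodGet,
			"https://192.168.58.2:8443/api/v1/nodes/multinode-004925", nil)
		if err != nil {
			panic(err)
		}
		// The same request headers that round_trippers prints for each poll.
		req.Header.Set("Accept", "application/json, */*")
		req.Header.Set("User-Agent", "minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format")
		req.Header.Set("Authorization", "Bearer "+token)

		resp, err := client.Do(req)
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		if len(body) > 200 {
			body = body[:200] // the log shows the same Node JSON, truncated
		}
		fmt.Println(resp.Status) // e.g. "200 OK", as in the Response Status lines
		fmt.Println(string(body))
	}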
	I0103 20:17:05.402863  478496 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-004925
	I0103 20:17:05.402889  478496 round_trippers.go:469] Request Headers:
	I0103 20:17:05.402901  478496 round_trippers.go:473]     Accept: application/json, */*
	I0103 20:17:05.402908  478496 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0103 20:17:05.405663  478496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 20:17:05.405687  478496 round_trippers.go:577] Response Headers:
	I0103 20:17:05.405695  478496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 20:17:05.405702  478496 round_trippers.go:580]     Content-Type: application/json
	I0103 20:17:05.405709  478496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2dd5fe00-4fef-4b9d-9976-ced013b24f01
	I0103 20:17:05.405715  478496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1353a2-568a-4e94-914a-294122a8013a
	I0103 20:17:05.405722  478496 round_trippers.go:580]     Date: Wed, 03 Jan 2024 20:17:05 GMT
	I0103 20:17:05.405728  478496 round_trippers.go:580]     Audit-Id: c2ba36d6-ba5f-4cec-b5d2-219d2c672049
	I0103 20:17:05.405882  478496 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-004925","uid":"106a4418-70f9-4422-af71-2d3c896282a0","resourceVersion":"345","creationTimestamp":"2024-01-03T20:16:24Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-004925","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-004925","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T20_16_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T20:16:24Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0103 20:17:05.903083  478496 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-004925
	I0103 20:17:05.903110  478496 round_trippers.go:469] Request Headers:
	I0103 20:17:05.903121  478496 round_trippers.go:473]     Accept: application/json, */*
	I0103 20:17:05.903128  478496 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0103 20:17:05.905937  478496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 20:17:05.905963  478496 round_trippers.go:577] Response Headers:
	I0103 20:17:05.905972  478496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1353a2-568a-4e94-914a-294122a8013a
	I0103 20:17:05.905979  478496 round_trippers.go:580]     Date: Wed, 03 Jan 2024 20:17:05 GMT
	I0103 20:17:05.905986  478496 round_trippers.go:580]     Audit-Id: e1e2b1b5-f77c-4afa-a3bc-62c9aac4782e
	I0103 20:17:05.905992  478496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 20:17:05.905998  478496 round_trippers.go:580]     Content-Type: application/json
	I0103 20:17:05.906004  478496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2dd5fe00-4fef-4b9d-9976-ced013b24f01
	I0103 20:17:05.906142  478496 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-004925","uid":"106a4418-70f9-4422-af71-2d3c896282a0","resourceVersion":"345","creationTimestamp":"2024-01-03T20:16:24Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-004925","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-004925","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T20_16_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T20:16:24Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0103 20:17:06.402252  478496 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-004925
	I0103 20:17:06.402280  478496 round_trippers.go:469] Request Headers:
	I0103 20:17:06.402291  478496 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0103 20:17:06.402299  478496 round_trippers.go:473]     Accept: application/json, */*
	I0103 20:17:06.405279  478496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 20:17:06.405304  478496 round_trippers.go:577] Response Headers:
	I0103 20:17:06.405314  478496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1353a2-568a-4e94-914a-294122a8013a
	I0103 20:17:06.405322  478496 round_trippers.go:580]     Date: Wed, 03 Jan 2024 20:17:06 GMT
	I0103 20:17:06.405328  478496 round_trippers.go:580]     Audit-Id: e463d9d7-6ce6-4b7d-92fd-3d7bc35993f5
	I0103 20:17:06.405335  478496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 20:17:06.405341  478496 round_trippers.go:580]     Content-Type: application/json
	I0103 20:17:06.405348  478496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2dd5fe00-4fef-4b9d-9976-ced013b24f01
	I0103 20:17:06.405562  478496 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-004925","uid":"106a4418-70f9-4422-af71-2d3c896282a0","resourceVersion":"345","creationTimestamp":"2024-01-03T20:16:24Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-004925","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-004925","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T20_16_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T20:16:24Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0103 20:17:06.902128  478496 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-004925
	I0103 20:17:06.902158  478496 round_trippers.go:469] Request Headers:
	I0103 20:17:06.902169  478496 round_trippers.go:473]     Accept: application/json, */*
	I0103 20:17:06.902176  478496 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0103 20:17:06.905151  478496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 20:17:06.905174  478496 round_trippers.go:577] Response Headers:
	I0103 20:17:06.905183  478496 round_trippers.go:580]     Content-Type: application/json
	I0103 20:17:06.905189  478496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2dd5fe00-4fef-4b9d-9976-ced013b24f01
	I0103 20:17:06.905196  478496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1353a2-568a-4e94-914a-294122a8013a
	I0103 20:17:06.905202  478496 round_trippers.go:580]     Date: Wed, 03 Jan 2024 20:17:06 GMT
	I0103 20:17:06.905209  478496 round_trippers.go:580]     Audit-Id: c687a31c-0f60-41e0-9774-9197994d9027
	I0103 20:17:06.905216  478496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 20:17:06.905417  478496 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-004925","uid":"106a4418-70f9-4422-af71-2d3c896282a0","resourceVersion":"345","creationTimestamp":"2024-01-03T20:16:24Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-004925","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-004925","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T20_16_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T20:16:24Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0103 20:17:07.403079  478496 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-004925
	I0103 20:17:07.403104  478496 round_trippers.go:469] Request Headers:
	I0103 20:17:07.403114  478496 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0103 20:17:07.403121  478496 round_trippers.go:473]     Accept: application/json, */*
	I0103 20:17:07.407693  478496 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0103 20:17:07.407716  478496 round_trippers.go:577] Response Headers:
	I0103 20:17:07.407726  478496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 20:17:07.407733  478496 round_trippers.go:580]     Content-Type: application/json
	I0103 20:17:07.407739  478496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2dd5fe00-4fef-4b9d-9976-ced013b24f01
	I0103 20:17:07.407746  478496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1353a2-568a-4e94-914a-294122a8013a
	I0103 20:17:07.407752  478496 round_trippers.go:580]     Date: Wed, 03 Jan 2024 20:17:07 GMT
	I0103 20:17:07.407759  478496 round_trippers.go:580]     Audit-Id: f209b019-e879-4809-aa98-b9213592043e
	I0103 20:17:07.407898  478496 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-004925","uid":"106a4418-70f9-4422-af71-2d3c896282a0","resourceVersion":"345","creationTimestamp":"2024-01-03T20:16:24Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-004925","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-004925","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T20_16_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T20:16:24Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0103 20:17:07.408313  478496 node_ready.go:58] node "multinode-004925" has status "Ready":"False"
	I0103 20:17:07.902092  478496 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-004925
	I0103 20:17:07.902131  478496 round_trippers.go:469] Request Headers:
	I0103 20:17:07.902141  478496 round_trippers.go:473]     Accept: application/json, */*
	I0103 20:17:07.902161  478496 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0103 20:17:07.905025  478496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 20:17:07.905052  478496 round_trippers.go:577] Response Headers:
	I0103 20:17:07.905062  478496 round_trippers.go:580]     Content-Type: application/json
	I0103 20:17:07.905069  478496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2dd5fe00-4fef-4b9d-9976-ced013b24f01
	I0103 20:17:07.905075  478496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1353a2-568a-4e94-914a-294122a8013a
	I0103 20:17:07.905082  478496 round_trippers.go:580]     Date: Wed, 03 Jan 2024 20:17:07 GMT
	I0103 20:17:07.905088  478496 round_trippers.go:580]     Audit-Id: 11b51336-7a25-4876-b0c7-489bb0a4e362
	I0103 20:17:07.905094  478496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 20:17:07.905225  478496 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-004925","uid":"106a4418-70f9-4422-af71-2d3c896282a0","resourceVersion":"345","creationTimestamp":"2024-01-03T20:16:24Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-004925","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-004925","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T20_16_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T20:16:24Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0103 20:17:08.402744  478496 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-004925
	I0103 20:17:08.402772  478496 round_trippers.go:469] Request Headers:
	I0103 20:17:08.402783  478496 round_trippers.go:473]     Accept: application/json, */*
	I0103 20:17:08.402790  478496 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0103 20:17:08.405315  478496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 20:17:08.405340  478496 round_trippers.go:577] Response Headers:
	I0103 20:17:08.405349  478496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 20:17:08.405355  478496 round_trippers.go:580]     Content-Type: application/json
	I0103 20:17:08.405362  478496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2dd5fe00-4fef-4b9d-9976-ced013b24f01
	I0103 20:17:08.405368  478496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1353a2-568a-4e94-914a-294122a8013a
	I0103 20:17:08.405375  478496 round_trippers.go:580]     Date: Wed, 03 Jan 2024 20:17:08 GMT
	I0103 20:17:08.405388  478496 round_trippers.go:580]     Audit-Id: 95dae524-307c-46c7-88fe-031b88968f0f
	I0103 20:17:08.405618  478496 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-004925","uid":"106a4418-70f9-4422-af71-2d3c896282a0","resourceVersion":"345","creationTimestamp":"2024-01-03T20:16:24Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-004925","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-004925","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T20_16_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T20:16:24Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0103 20:17:08.902755  478496 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-004925
	I0103 20:17:08.902780  478496 round_trippers.go:469] Request Headers:
	I0103 20:17:08.902790  478496 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0103 20:17:08.902798  478496 round_trippers.go:473]     Accept: application/json, */*
	I0103 20:17:08.905369  478496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 20:17:08.905397  478496 round_trippers.go:577] Response Headers:
	I0103 20:17:08.905406  478496 round_trippers.go:580]     Content-Type: application/json
	I0103 20:17:08.905413  478496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2dd5fe00-4fef-4b9d-9976-ced013b24f01
	I0103 20:17:08.905420  478496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1353a2-568a-4e94-914a-294122a8013a
	I0103 20:17:08.905426  478496 round_trippers.go:580]     Date: Wed, 03 Jan 2024 20:17:08 GMT
	I0103 20:17:08.905433  478496 round_trippers.go:580]     Audit-Id: 704d20f9-1dff-4a4c-910c-4b06936c961f
	I0103 20:17:08.905439  478496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 20:17:08.905621  478496 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-004925","uid":"106a4418-70f9-4422-af71-2d3c896282a0","resourceVersion":"345","creationTimestamp":"2024-01-03T20:16:24Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-004925","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-004925","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T20_16_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T20:16:24Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0103 20:17:09.402887  478496 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-004925
	I0103 20:17:09.402912  478496 round_trippers.go:469] Request Headers:
	I0103 20:17:09.402923  478496 round_trippers.go:473]     Accept: application/json, */*
	I0103 20:17:09.402931  478496 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0103 20:17:09.405450  478496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 20:17:09.405471  478496 round_trippers.go:577] Response Headers:
	I0103 20:17:09.405480  478496 round_trippers.go:580]     Audit-Id: 93c32856-e16b-4571-b3a7-74c54a74868a
	I0103 20:17:09.405486  478496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 20:17:09.405492  478496 round_trippers.go:580]     Content-Type: application/json
	I0103 20:17:09.405499  478496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2dd5fe00-4fef-4b9d-9976-ced013b24f01
	I0103 20:17:09.405506  478496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1353a2-568a-4e94-914a-294122a8013a
	I0103 20:17:09.405512  478496 round_trippers.go:580]     Date: Wed, 03 Jan 2024 20:17:09 GMT
	I0103 20:17:09.405607  478496 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-004925","uid":"106a4418-70f9-4422-af71-2d3c896282a0","resourceVersion":"345","creationTimestamp":"2024-01-03T20:16:24Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-004925","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-004925","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T20_16_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T20:16:24Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0103 20:17:09.902645  478496 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-004925
	I0103 20:17:09.902671  478496 round_trippers.go:469] Request Headers:
	I0103 20:17:09.902681  478496 round_trippers.go:473]     Accept: application/json, */*
	I0103 20:17:09.902689  478496 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0103 20:17:09.905369  478496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 20:17:09.905391  478496 round_trippers.go:577] Response Headers:
	I0103 20:17:09.905400  478496 round_trippers.go:580]     Content-Type: application/json
	I0103 20:17:09.905407  478496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2dd5fe00-4fef-4b9d-9976-ced013b24f01
	I0103 20:17:09.905413  478496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1353a2-568a-4e94-914a-294122a8013a
	I0103 20:17:09.905420  478496 round_trippers.go:580]     Date: Wed, 03 Jan 2024 20:17:09 GMT
	I0103 20:17:09.905426  478496 round_trippers.go:580]     Audit-Id: d0444c12-4c3c-4cac-a8c8-13bb1fc375f7
	I0103 20:17:09.905433  478496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 20:17:09.905640  478496 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-004925","uid":"106a4418-70f9-4422-af71-2d3c896282a0","resourceVersion":"345","creationTimestamp":"2024-01-03T20:16:24Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-004925","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-004925","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T20_16_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T20:16:24Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0103 20:17:09.906042  478496 node_ready.go:58] node "multinode-004925" has status "Ready":"False"
	I0103 20:17:10.402320  478496 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-004925
	I0103 20:17:10.402344  478496 round_trippers.go:469] Request Headers:
	I0103 20:17:10.402354  478496 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0103 20:17:10.402367  478496 round_trippers.go:473]     Accept: application/json, */*
	I0103 20:17:10.404893  478496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 20:17:10.404915  478496 round_trippers.go:577] Response Headers:
	I0103 20:17:10.404923  478496 round_trippers.go:580]     Content-Type: application/json
	I0103 20:17:10.404930  478496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2dd5fe00-4fef-4b9d-9976-ced013b24f01
	I0103 20:17:10.404937  478496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1353a2-568a-4e94-914a-294122a8013a
	I0103 20:17:10.404943  478496 round_trippers.go:580]     Date: Wed, 03 Jan 2024 20:17:10 GMT
	I0103 20:17:10.404949  478496 round_trippers.go:580]     Audit-Id: ed47fee0-c9f7-4ca7-86de-9ba592174e4b
	I0103 20:17:10.404956  478496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 20:17:10.405084  478496 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-004925","uid":"106a4418-70f9-4422-af71-2d3c896282a0","resourceVersion":"345","creationTimestamp":"2024-01-03T20:16:24Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-004925","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-004925","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T20_16_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T20:16:24Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0103 20:17:10.902141  478496 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-004925
	I0103 20:17:10.902165  478496 round_trippers.go:469] Request Headers:
	I0103 20:17:10.902175  478496 round_trippers.go:473]     Accept: application/json, */*
	I0103 20:17:10.902183  478496 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0103 20:17:10.904742  478496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 20:17:10.904770  478496 round_trippers.go:577] Response Headers:
	I0103 20:17:10.904779  478496 round_trippers.go:580]     Audit-Id: e5cebb55-0783-4773-aade-bae08223b8a8
	I0103 20:17:10.904785  478496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 20:17:10.904791  478496 round_trippers.go:580]     Content-Type: application/json
	I0103 20:17:10.904798  478496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2dd5fe00-4fef-4b9d-9976-ced013b24f01
	I0103 20:17:10.904804  478496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1353a2-568a-4e94-914a-294122a8013a
	I0103 20:17:10.904811  478496 round_trippers.go:580]     Date: Wed, 03 Jan 2024 20:17:10 GMT
	I0103 20:17:10.904940  478496 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-004925","uid":"106a4418-70f9-4422-af71-2d3c896282a0","resourceVersion":"345","creationTimestamp":"2024-01-03T20:16:24Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-004925","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-004925","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T20_16_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T20:16:24Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0103 20:17:11.402045  478496 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-004925
	I0103 20:17:11.402074  478496 round_trippers.go:469] Request Headers:
	I0103 20:17:11.402084  478496 round_trippers.go:473]     Accept: application/json, */*
	I0103 20:17:11.402092  478496 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0103 20:17:11.404677  478496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 20:17:11.404702  478496 round_trippers.go:577] Response Headers:
	I0103 20:17:11.404710  478496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2dd5fe00-4fef-4b9d-9976-ced013b24f01
	I0103 20:17:11.404717  478496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1353a2-568a-4e94-914a-294122a8013a
	I0103 20:17:11.404723  478496 round_trippers.go:580]     Date: Wed, 03 Jan 2024 20:17:11 GMT
	I0103 20:17:11.404730  478496 round_trippers.go:580]     Audit-Id: 8b624064-3950-4c9b-a0a2-6af18854dbf3
	I0103 20:17:11.404736  478496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 20:17:11.404742  478496 round_trippers.go:580]     Content-Type: application/json
	I0103 20:17:11.404891  478496 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-004925","uid":"106a4418-70f9-4422-af71-2d3c896282a0","resourceVersion":"345","creationTimestamp":"2024-01-03T20:16:24Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-004925","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-004925","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T20_16_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T20:16:24Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0103 20:17:11.902439  478496 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-004925
	I0103 20:17:11.902464  478496 round_trippers.go:469] Request Headers:
	I0103 20:17:11.902474  478496 round_trippers.go:473]     Accept: application/json, */*
	I0103 20:17:11.902481  478496 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0103 20:17:11.905073  478496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 20:17:11.905102  478496 round_trippers.go:577] Response Headers:
	I0103 20:17:11.905110  478496 round_trippers.go:580]     Audit-Id: 6650e669-8ade-4b71-99ad-fd2445239246
	I0103 20:17:11.905117  478496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 20:17:11.905123  478496 round_trippers.go:580]     Content-Type: application/json
	I0103 20:17:11.905129  478496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2dd5fe00-4fef-4b9d-9976-ced013b24f01
	I0103 20:17:11.905135  478496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1353a2-568a-4e94-914a-294122a8013a
	I0103 20:17:11.905143  478496 round_trippers.go:580]     Date: Wed, 03 Jan 2024 20:17:11 GMT
	I0103 20:17:11.905301  478496 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-004925","uid":"106a4418-70f9-4422-af71-2d3c896282a0","resourceVersion":"345","creationTimestamp":"2024-01-03T20:16:24Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-004925","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-004925","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T20_16_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T20:16:24Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0103 20:17:12.402137  478496 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-004925
	I0103 20:17:12.402162  478496 round_trippers.go:469] Request Headers:
	I0103 20:17:12.402172  478496 round_trippers.go:473]     Accept: application/json, */*
	I0103 20:17:12.402180  478496 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0103 20:17:12.404725  478496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 20:17:12.404748  478496 round_trippers.go:577] Response Headers:
	I0103 20:17:12.404756  478496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2dd5fe00-4fef-4b9d-9976-ced013b24f01
	I0103 20:17:12.404763  478496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1353a2-568a-4e94-914a-294122a8013a
	I0103 20:17:12.404771  478496 round_trippers.go:580]     Date: Wed, 03 Jan 2024 20:17:12 GMT
	I0103 20:17:12.404777  478496 round_trippers.go:580]     Audit-Id: 248319ea-ff5f-4c49-806b-749bd6b56104
	I0103 20:17:12.404784  478496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 20:17:12.404790  478496 round_trippers.go:580]     Content-Type: application/json
	I0103 20:17:12.404990  478496 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-004925","uid":"106a4418-70f9-4422-af71-2d3c896282a0","resourceVersion":"425","creationTimestamp":"2024-01-03T20:16:24Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-004925","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-004925","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T20_16_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T20:16:24Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0103 20:17:12.405392  478496 node_ready.go:49] node "multinode-004925" has status "Ready":"True"
	I0103 20:17:12.405405  478496 node_ready.go:38] duration metric: took 30.003492405s waiting for node "multinode-004925" to be "Ready" ...
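	The ~30s of GETs above is a plain readiness poll: fetch the Node object roughly every 500ms and inspect its NodeReady condition until it reports True. Below is a minimal client-go sketch of an equivalent wait, included only to make the loop's logic explicit; the poll interval, kubeconfig source, and all names here are illustrative assumptions, not minikube's actual node_ready implementation.

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Load ~/.kube/config; a test harness would point this at the
		// profile's kubeconfig instead (assumption for this sketch).
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)

		// Poll every 500ms (matching the ~500ms spacing of the GETs in
		// the log) until NodeReady is True or the timeout expires.
		err = wait.PollImmediate(500*time.Millisecond, 6*time.Minute, func() (bool, error) {
			node, err := client.CoreV1().Nodes().Get(context.TODO(), "multinode-004925", metav1.GetOptions{})
			if err != nil {
				return false, nil // treat transient API errors as "not ready yet"
			}
			for _, cond := range node.Status.Conditions {
				if cond.Type == corev1.NodeReady {
					return cond.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
		fmt.Println("node ready:", err == nil)
	}

	wait.PollImmediate returns nil once the condition function reports done, which corresponds to the node_ready.go transition above from "Ready":"False" to "Ready":"True".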
	I0103 20:17:12.405414  478496 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0103 20:17:12.405511  478496 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0103 20:17:12.405516  478496 round_trippers.go:469] Request Headers:
	I0103 20:17:12.405524  478496 round_trippers.go:473]     Accept: application/json, */*
	I0103 20:17:12.405531  478496 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0103 20:17:12.409055  478496 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0103 20:17:12.409075  478496 round_trippers.go:577] Response Headers:
	I0103 20:17:12.409083  478496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 20:17:12.409089  478496 round_trippers.go:580]     Content-Type: application/json
	I0103 20:17:12.409095  478496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2dd5fe00-4fef-4b9d-9976-ced013b24f01
	I0103 20:17:12.409102  478496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1353a2-568a-4e94-914a-294122a8013a
	I0103 20:17:12.409109  478496 round_trippers.go:580]     Date: Wed, 03 Jan 2024 20:17:12 GMT
	I0103 20:17:12.409115  478496 round_trippers.go:580]     Audit-Id: 1b131eca-509e-4c7e-8a4c-bda3af0904aa
	I0103 20:17:12.409787  478496 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"431"},"items":[{"metadata":{"name":"coredns-5dd5756b68-g2x92","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f982667b-3ee3-4aaa-9b63-2bee4f32be8f","resourceVersion":"431","creationTimestamp":"2024-01-03T20:16:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"8ee1252d-62a1-4cc7-8c78-b162002b9076","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-03T20:16:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8ee1252d-62a1-4cc7-8c78-b162002b9076\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55534 chars]
	I0103 20:17:12.413715  478496 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-g2x92" in "kube-system" namespace to be "Ready" ...
	I0103 20:17:12.413811  478496 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-g2x92
	I0103 20:17:12.413817  478496 round_trippers.go:469] Request Headers:
	I0103 20:17:12.413825  478496 round_trippers.go:473]     Accept: application/json, */*
	I0103 20:17:12.413832  478496 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0103 20:17:12.416369  478496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 20:17:12.416388  478496 round_trippers.go:577] Response Headers:
	I0103 20:17:12.416396  478496 round_trippers.go:580]     Audit-Id: 7d0c30c3-24f1-4269-8f0f-8f9447e00381
	I0103 20:17:12.416403  478496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 20:17:12.416409  478496 round_trippers.go:580]     Content-Type: application/json
	I0103 20:17:12.416415  478496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2dd5fe00-4fef-4b9d-9976-ced013b24f01
	I0103 20:17:12.416421  478496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1353a2-568a-4e94-914a-294122a8013a
	I0103 20:17:12.416428  478496 round_trippers.go:580]     Date: Wed, 03 Jan 2024 20:17:12 GMT
	I0103 20:17:12.416599  478496 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-g2x92","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f982667b-3ee3-4aaa-9b63-2bee4f32be8f","resourceVersion":"431","creationTimestamp":"2024-01-03T20:16:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"8ee1252d-62a1-4cc7-8c78-b162002b9076","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-03T20:16:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8ee1252d-62a1-4cc7-8c78-b162002b9076\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I0103 20:17:12.417095  478496 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-004925
	I0103 20:17:12.417112  478496 round_trippers.go:469] Request Headers:
	I0103 20:17:12.417121  478496 round_trippers.go:473]     Accept: application/json, */*
	I0103 20:17:12.417128  478496 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0103 20:17:12.419330  478496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 20:17:12.419348  478496 round_trippers.go:577] Response Headers:
	I0103 20:17:12.419356  478496 round_trippers.go:580]     Audit-Id: 97ec2410-eb57-47bb-8c9e-19dab5c2a290
	I0103 20:17:12.419362  478496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 20:17:12.419368  478496 round_trippers.go:580]     Content-Type: application/json
	I0103 20:17:12.419374  478496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2dd5fe00-4fef-4b9d-9976-ced013b24f01
	I0103 20:17:12.419383  478496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1353a2-568a-4e94-914a-294122a8013a
	I0103 20:17:12.419390  478496 round_trippers.go:580]     Date: Wed, 03 Jan 2024 20:17:12 GMT
	I0103 20:17:12.419506  478496 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-004925","uid":"106a4418-70f9-4422-af71-2d3c896282a0","resourceVersion":"425","creationTimestamp":"2024-01-03T20:16:24Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-004925","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-004925","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T20_16_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T20:16:24Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0103 20:17:12.914663  478496 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-g2x92
	I0103 20:17:12.914689  478496 round_trippers.go:469] Request Headers:
	I0103 20:17:12.914699  478496 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0103 20:17:12.914707  478496 round_trippers.go:473]     Accept: application/json, */*
	I0103 20:17:12.917349  478496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 20:17:12.917378  478496 round_trippers.go:577] Response Headers:
	I0103 20:17:12.917387  478496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1353a2-568a-4e94-914a-294122a8013a
	I0103 20:17:12.917395  478496 round_trippers.go:580]     Date: Wed, 03 Jan 2024 20:17:12 GMT
	I0103 20:17:12.917408  478496 round_trippers.go:580]     Audit-Id: f3b1b102-7863-446e-9e28-9d23a873bb2b
	I0103 20:17:12.917415  478496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 20:17:12.917421  478496 round_trippers.go:580]     Content-Type: application/json
	I0103 20:17:12.917427  478496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2dd5fe00-4fef-4b9d-9976-ced013b24f01
	I0103 20:17:12.917601  478496 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-g2x92","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f982667b-3ee3-4aaa-9b63-2bee4f32be8f","resourceVersion":"431","creationTimestamp":"2024-01-03T20:16:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"8ee1252d-62a1-4cc7-8c78-b162002b9076","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-03T20:16:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8ee1252d-62a1-4cc7-8c78-b162002b9076\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I0103 20:17:12.918126  478496 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-004925
	I0103 20:17:12.918141  478496 round_trippers.go:469] Request Headers:
	I0103 20:17:12.918150  478496 round_trippers.go:473]     Accept: application/json, */*
	I0103 20:17:12.918166  478496 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0103 20:17:12.920545  478496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 20:17:12.920605  478496 round_trippers.go:577] Response Headers:
	I0103 20:17:12.920627  478496 round_trippers.go:580]     Audit-Id: 6caf1efe-06da-4bec-baa5-7e986a89ae21
	I0103 20:17:12.920645  478496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 20:17:12.920665  478496 round_trippers.go:580]     Content-Type: application/json
	I0103 20:17:12.920695  478496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2dd5fe00-4fef-4b9d-9976-ced013b24f01
	I0103 20:17:12.920714  478496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1353a2-568a-4e94-914a-294122a8013a
	I0103 20:17:12.920735  478496 round_trippers.go:580]     Date: Wed, 03 Jan 2024 20:17:12 GMT
	I0103 20:17:12.920887  478496 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-004925","uid":"106a4418-70f9-4422-af71-2d3c896282a0","resourceVersion":"425","creationTimestamp":"2024-01-03T20:16:24Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-004925","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-004925","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T20_16_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T20:16:24Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0103 20:17:13.414306  478496 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-g2x92
	I0103 20:17:13.414331  478496 round_trippers.go:469] Request Headers:
	I0103 20:17:13.414341  478496 round_trippers.go:473]     Accept: application/json, */*
	I0103 20:17:13.414364  478496 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0103 20:17:13.417010  478496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 20:17:13.417037  478496 round_trippers.go:577] Response Headers:
	I0103 20:17:13.417046  478496 round_trippers.go:580]     Audit-Id: 003e085a-ac1e-4589-b11f-d1622d4212cf
	I0103 20:17:13.417053  478496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 20:17:13.417059  478496 round_trippers.go:580]     Content-Type: application/json
	I0103 20:17:13.417067  478496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2dd5fe00-4fef-4b9d-9976-ced013b24f01
	I0103 20:17:13.417074  478496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1353a2-568a-4e94-914a-294122a8013a
	I0103 20:17:13.417080  478496 round_trippers.go:580]     Date: Wed, 03 Jan 2024 20:17:13 GMT
	I0103 20:17:13.417209  478496 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-g2x92","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f982667b-3ee3-4aaa-9b63-2bee4f32be8f","resourceVersion":"431","creationTimestamp":"2024-01-03T20:16:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"8ee1252d-62a1-4cc7-8c78-b162002b9076","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-03T20:16:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8ee1252d-62a1-4cc7-8c78-b162002b9076\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I0103 20:17:13.417732  478496 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-004925
	I0103 20:17:13.417750  478496 round_trippers.go:469] Request Headers:
	I0103 20:17:13.417759  478496 round_trippers.go:473]     Accept: application/json, */*
	I0103 20:17:13.417766  478496 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0103 20:17:13.420089  478496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 20:17:13.420113  478496 round_trippers.go:577] Response Headers:
	I0103 20:17:13.420121  478496 round_trippers.go:580]     Audit-Id: 89e1d59e-7799-4da3-b5fe-e3a04c1a0fb8
	I0103 20:17:13.420128  478496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 20:17:13.420134  478496 round_trippers.go:580]     Content-Type: application/json
	I0103 20:17:13.420141  478496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2dd5fe00-4fef-4b9d-9976-ced013b24f01
	I0103 20:17:13.420147  478496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1353a2-568a-4e94-914a-294122a8013a
	I0103 20:17:13.420155  478496 round_trippers.go:580]     Date: Wed, 03 Jan 2024 20:17:13 GMT
	I0103 20:17:13.420283  478496 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-004925","uid":"106a4418-70f9-4422-af71-2d3c896282a0","resourceVersion":"425","creationTimestamp":"2024-01-03T20:16:24Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-004925","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-004925","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T20_16_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T20:16:24Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0103 20:17:13.914422  478496 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-g2x92
	I0103 20:17:13.914448  478496 round_trippers.go:469] Request Headers:
	I0103 20:17:13.914458  478496 round_trippers.go:473]     Accept: application/json, */*
	I0103 20:17:13.914466  478496 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0103 20:17:13.917065  478496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 20:17:13.917086  478496 round_trippers.go:577] Response Headers:
	I0103 20:17:13.917095  478496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 20:17:13.917101  478496 round_trippers.go:580]     Content-Type: application/json
	I0103 20:17:13.917108  478496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2dd5fe00-4fef-4b9d-9976-ced013b24f01
	I0103 20:17:13.917114  478496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1353a2-568a-4e94-914a-294122a8013a
	I0103 20:17:13.917121  478496 round_trippers.go:580]     Date: Wed, 03 Jan 2024 20:17:13 GMT
	I0103 20:17:13.917127  478496 round_trippers.go:580]     Audit-Id: bba96fc6-e447-4eac-9348-0e2ca50bab0d
	I0103 20:17:13.917246  478496 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-g2x92","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f982667b-3ee3-4aaa-9b63-2bee4f32be8f","resourceVersion":"441","creationTimestamp":"2024-01-03T20:16:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"8ee1252d-62a1-4cc7-8c78-b162002b9076","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-03T20:16:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8ee1252d-62a1-4cc7-8c78-b162002b9076\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6263 chars]
	I0103 20:17:13.917759  478496 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-004925
	I0103 20:17:13.917768  478496 round_trippers.go:469] Request Headers:
	I0103 20:17:13.917775  478496 round_trippers.go:473]     Accept: application/json, */*
	I0103 20:17:13.917782  478496 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0103 20:17:13.920049  478496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 20:17:13.920118  478496 round_trippers.go:577] Response Headers:
	I0103 20:17:13.920140  478496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1353a2-568a-4e94-914a-294122a8013a
	I0103 20:17:13.920158  478496 round_trippers.go:580]     Date: Wed, 03 Jan 2024 20:17:13 GMT
	I0103 20:17:13.920190  478496 round_trippers.go:580]     Audit-Id: 157389ab-3cd5-4f45-92fb-11c91dfbcb94
	I0103 20:17:13.920217  478496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 20:17:13.920230  478496 round_trippers.go:580]     Content-Type: application/json
	I0103 20:17:13.920240  478496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2dd5fe00-4fef-4b9d-9976-ced013b24f01
	I0103 20:17:13.920389  478496 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-004925","uid":"106a4418-70f9-4422-af71-2d3c896282a0","resourceVersion":"425","creationTimestamp":"2024-01-03T20:16:24Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-004925","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-004925","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T20_16_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T20:16:24Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0103 20:17:13.920793  478496 pod_ready.go:92] pod "coredns-5dd5756b68-g2x92" in "kube-system" namespace has status "Ready":"True"
	I0103 20:17:13.920815  478496 pod_ready.go:81] duration metric: took 1.507075749s waiting for pod "coredns-5dd5756b68-g2x92" in "kube-system" namespace to be "Ready" ...
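	Each pod_ready wait above is the same pattern at pod scope: GET the pod (and re-check its node), then test the pod's PodReady condition. A small illustrative sketch of that check, assuming client-go and a local kubeconfig; the function names and single List call are assumptions for the sketch, not minikube's pod_ready code, which GETs each pod individually as the log shows.

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// podReady reports whether the pod's PodReady condition is True; each
	// per-pod wait in the log reduces to this test.
	func podReady(pod *corev1.Pod) bool {
		for _, cond := range pod.Status.Conditions {
			if cond.Type == corev1.PodReady {
				return cond.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)

		// One list over kube-system, mirroring the
		// GET /api/v1/namespaces/kube-system/pods request in the log,
		// then a readiness check per pod.
		pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for i := range pods.Items {
			p := &pods.Items[i]
			fmt.Printf("%s ready=%v\n", p.Name, podReady(p))
		}
	}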
	I0103 20:17:13.920826  478496 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-004925" in "kube-system" namespace to be "Ready" ...
	I0103 20:17:13.920885  478496 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-004925
	I0103 20:17:13.920895  478496 round_trippers.go:469] Request Headers:
	I0103 20:17:13.920902  478496 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0103 20:17:13.920909  478496 round_trippers.go:473]     Accept: application/json, */*
	I0103 20:17:13.923137  478496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 20:17:13.923192  478496 round_trippers.go:577] Response Headers:
	I0103 20:17:13.923223  478496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1353a2-568a-4e94-914a-294122a8013a
	I0103 20:17:13.923243  478496 round_trippers.go:580]     Date: Wed, 03 Jan 2024 20:17:13 GMT
	I0103 20:17:13.923305  478496 round_trippers.go:580]     Audit-Id: 0460d5ab-cc1e-48a8-8b6c-df4e25a7635a
	I0103 20:17:13.923325  478496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 20:17:13.923357  478496 round_trippers.go:580]     Content-Type: application/json
	I0103 20:17:13.923379  478496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2dd5fe00-4fef-4b9d-9976-ced013b24f01
	I0103 20:17:13.923488  478496 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-004925","namespace":"kube-system","uid":"5cab1935-b192-4f00-b293-deb85397ee0e","resourceVersion":"317","creationTimestamp":"2024-01-03T20:16:28Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"6c164ba77557851cf6a185bf74f58276","kubernetes.io/config.mirror":"6c164ba77557851cf6a185bf74f58276","kubernetes.io/config.seen":"2024-01-03T20:16:28.322945478Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-004925","uid":"106a4418-70f9-4422-af71-2d3c896282a0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T20:16:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 5833 chars]
	I0103 20:17:13.923933  478496 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-004925
	I0103 20:17:13.923951  478496 round_trippers.go:469] Request Headers:
	I0103 20:17:13.923959  478496 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0103 20:17:13.923968  478496 round_trippers.go:473]     Accept: application/json, */*
	I0103 20:17:13.926092  478496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 20:17:13.926172  478496 round_trippers.go:577] Response Headers:
	I0103 20:17:13.926186  478496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1353a2-568a-4e94-914a-294122a8013a
	I0103 20:17:13.926194  478496 round_trippers.go:580]     Date: Wed, 03 Jan 2024 20:17:13 GMT
	I0103 20:17:13.926200  478496 round_trippers.go:580]     Audit-Id: c63f3922-090c-4483-92c7-c9a2122cbbdd
	I0103 20:17:13.926222  478496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 20:17:13.926236  478496 round_trippers.go:580]     Content-Type: application/json
	I0103 20:17:13.926242  478496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2dd5fe00-4fef-4b9d-9976-ced013b24f01
	I0103 20:17:13.926364  478496 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-004925","uid":"106a4418-70f9-4422-af71-2d3c896282a0","resourceVersion":"425","creationTimestamp":"2024-01-03T20:16:24Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-004925","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-004925","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T20_16_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T20:16:24Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0103 20:17:13.926793  478496 pod_ready.go:92] pod "etcd-multinode-004925" in "kube-system" namespace has status "Ready":"True"
	I0103 20:17:13.926811  478496 pod_ready.go:81] duration metric: took 5.977799ms waiting for pod "etcd-multinode-004925" in "kube-system" namespace to be "Ready" ...
	I0103 20:17:13.926836  478496 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-004925" in "kube-system" namespace to be "Ready" ...
	I0103 20:17:13.926909  478496 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-004925
	I0103 20:17:13.926918  478496 round_trippers.go:469] Request Headers:
	I0103 20:17:13.926926  478496 round_trippers.go:473]     Accept: application/json, */*
	I0103 20:17:13.926934  478496 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0103 20:17:13.929234  478496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 20:17:13.929253  478496 round_trippers.go:577] Response Headers:
	I0103 20:17:13.929261  478496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 20:17:13.929267  478496 round_trippers.go:580]     Content-Type: application/json
	I0103 20:17:13.929274  478496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2dd5fe00-4fef-4b9d-9976-ced013b24f01
	I0103 20:17:13.929280  478496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1353a2-568a-4e94-914a-294122a8013a
	I0103 20:17:13.929286  478496 round_trippers.go:580]     Date: Wed, 03 Jan 2024 20:17:13 GMT
	I0103 20:17:13.929292  478496 round_trippers.go:580]     Audit-Id: 61dc997d-6d5f-4ed5-a9dd-92ad47f7f87a
	I0103 20:17:13.929450  478496 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-004925","namespace":"kube-system","uid":"7a543b23-069e-4da3-8d6d-c485af508606","resourceVersion":"318","creationTimestamp":"2024-01-03T20:16:26Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"a2578f205dcd3a65ab5244e64026e843","kubernetes.io/config.mirror":"a2578f205dcd3a65ab5244e64026e843","kubernetes.io/config.seen":"2024-01-03T20:16:19.858659699Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-004925","uid":"106a4418-70f9-4422-af71-2d3c896282a0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T20:16:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8219 chars]
	I0103 20:17:13.930042  478496 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-004925
	I0103 20:17:13.930059  478496 round_trippers.go:469] Request Headers:
	I0103 20:17:13.930069  478496 round_trippers.go:473]     Accept: application/json, */*
	I0103 20:17:13.930076  478496 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0103 20:17:13.932364  478496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 20:17:13.932389  478496 round_trippers.go:577] Response Headers:
	I0103 20:17:13.932398  478496 round_trippers.go:580]     Audit-Id: c3d12470-ac90-4ce4-8f52-adc481c3e0c1
	I0103 20:17:13.932404  478496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 20:17:13.932411  478496 round_trippers.go:580]     Content-Type: application/json
	I0103 20:17:13.932417  478496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2dd5fe00-4fef-4b9d-9976-ced013b24f01
	I0103 20:17:13.932426  478496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1353a2-568a-4e94-914a-294122a8013a
	I0103 20:17:13.932438  478496 round_trippers.go:580]     Date: Wed, 03 Jan 2024 20:17:13 GMT
	I0103 20:17:13.932685  478496 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-004925","uid":"106a4418-70f9-4422-af71-2d3c896282a0","resourceVersion":"425","creationTimestamp":"2024-01-03T20:16:24Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-004925","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-004925","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T20_16_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-03T20:16:24Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0103 20:17:13.933092  478496 pod_ready.go:92] pod "kube-apiserver-multinode-004925" in "kube-system" namespace has status "Ready":"True"
	I0103 20:17:13.933111  478496 pod_ready.go:81] duration metric: took 6.261669ms waiting for pod "kube-apiserver-multinode-004925" in "kube-system" namespace to be "Ready" ...
	I0103 20:17:13.933122  478496 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-004925" in "kube-system" namespace to be "Ready" ...
	I0103 20:17:13.933184  478496 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-004925
	I0103 20:17:13.933196  478496 round_trippers.go:469] Request Headers:
	I0103 20:17:13.933203  478496 round_trippers.go:473]     Accept: application/json, */*
	I0103 20:17:13.933211  478496 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0103 20:17:13.935603  478496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 20:17:13.935626  478496 round_trippers.go:577] Response Headers:
	I0103 20:17:13.935634  478496 round_trippers.go:580]     Date: Wed, 03 Jan 2024 20:17:13 GMT
	I0103 20:17:13.935641  478496 round_trippers.go:580]     Audit-Id: c7680fa5-3b9e-46f6-9f1a-322bacf102bb
	I0103 20:17:13.935647  478496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 20:17:13.935653  478496 round_trippers.go:580]     Content-Type: application/json
	I0103 20:17:13.935666  478496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2dd5fe00-4fef-4b9d-9976-ced013b24f01
	I0103 20:17:13.935675  478496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1353a2-568a-4e94-914a-294122a8013a
	I0103 20:17:13.935838  478496 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-004925","namespace":"kube-system","uid":"9e73201b-daa5-45ae-ab17-a0117f61c545","resourceVersion":"325","creationTimestamp":"2024-01-03T20:16:27Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"353654849794f26cfddf683d77aa8ece","kubernetes.io/config.mirror":"353654849794f26cfddf683d77aa8ece","kubernetes.io/config.seen":"2024-01-03T20:16:19.858661308Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-004925","uid":"106a4418-70f9-4422-af71-2d3c896282a0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T20:16:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7794 chars]
	I0103 20:17:13.936347  478496 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-004925
	I0103 20:17:13.936364  478496 round_trippers.go:469] Request Headers:
	I0103 20:17:13.936372  478496 round_trippers.go:473]     Accept: application/json, */*
	I0103 20:17:13.936379  478496 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0103 20:17:13.938492  478496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 20:17:13.938512  478496 round_trippers.go:577] Response Headers:
	I0103 20:17:13.938592  478496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1353a2-568a-4e94-914a-294122a8013a
	I0103 20:17:13.938599  478496 round_trippers.go:580]     Date: Wed, 03 Jan 2024 20:17:13 GMT
	I0103 20:17:13.938609  478496 round_trippers.go:580]     Audit-Id: 1e6a81e9-c84a-4625-b090-bdc0465b7418
	I0103 20:17:13.938616  478496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 20:17:13.938628  478496 round_trippers.go:580]     Content-Type: application/json
	I0103 20:17:13.938635  478496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2dd5fe00-4fef-4b9d-9976-ced013b24f01
	I0103 20:17:13.938904  478496 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-004925","uid":"106a4418-70f9-4422-af71-2d3c896282a0","resourceVersion":"425","creationTimestamp":"2024-01-03T20:16:24Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-004925","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-004925","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T20_16_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-03T20:16:24Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0103 20:17:13.939285  478496 pod_ready.go:92] pod "kube-controller-manager-multinode-004925" in "kube-system" namespace has status "Ready":"True"
	I0103 20:17:13.939302  478496 pod_ready.go:81] duration metric: took 6.172128ms waiting for pod "kube-controller-manager-multinode-004925" in "kube-system" namespace to be "Ready" ...
	I0103 20:17:13.939315  478496 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-dz4jl" in "kube-system" namespace to be "Ready" ...
	I0103 20:17:13.939376  478496 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dz4jl
	I0103 20:17:13.939386  478496 round_trippers.go:469] Request Headers:
	I0103 20:17:13.939394  478496 round_trippers.go:473]     Accept: application/json, */*
	I0103 20:17:13.939401  478496 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0103 20:17:13.941631  478496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 20:17:13.941684  478496 round_trippers.go:577] Response Headers:
	I0103 20:17:13.941705  478496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2dd5fe00-4fef-4b9d-9976-ced013b24f01
	I0103 20:17:13.941726  478496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1353a2-568a-4e94-914a-294122a8013a
	I0103 20:17:13.941752  478496 round_trippers.go:580]     Date: Wed, 03 Jan 2024 20:17:13 GMT
	I0103 20:17:13.941761  478496 round_trippers.go:580]     Audit-Id: 741ec62c-e4cb-4d1e-bfc5-7cf716b54784
	I0103 20:17:13.941776  478496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 20:17:13.941789  478496 round_trippers.go:580]     Content-Type: application/json
	I0103 20:17:13.941915  478496 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-dz4jl","generateName":"kube-proxy-","namespace":"kube-system","uid":"aa4b165f-582a-4c17-a00b-9552514c2006","resourceVersion":"412","creationTimestamp":"2024-01-03T20:16:40Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"295a82b4-4341-4501-ba93-f3574def778a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-03T20:16:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"295a82b4-4341-4501-ba93-f3574def778a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5510 chars]
	I0103 20:17:14.002604  478496 request.go:629] Waited for 60.212549ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-004925
	I0103 20:17:14.002710  478496 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-004925
	I0103 20:17:14.002749  478496 round_trippers.go:469] Request Headers:
	I0103 20:17:14.002765  478496 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0103 20:17:14.002773  478496 round_trippers.go:473]     Accept: application/json, */*
	I0103 20:17:14.017898  478496 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0103 20:17:14.017933  478496 round_trippers.go:577] Response Headers:
	I0103 20:17:14.017943  478496 round_trippers.go:580]     Date: Wed, 03 Jan 2024 20:17:14 GMT
	I0103 20:17:14.017950  478496 round_trippers.go:580]     Audit-Id: 886ba156-8641-4b96-b8f5-5e43b248a5f9
	I0103 20:17:14.017957  478496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 20:17:14.017963  478496 round_trippers.go:580]     Content-Type: application/json
	I0103 20:17:14.017970  478496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2dd5fe00-4fef-4b9d-9976-ced013b24f01
	I0103 20:17:14.017976  478496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1353a2-568a-4e94-914a-294122a8013a
	I0103 20:17:14.018130  478496 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-004925","uid":"106a4418-70f9-4422-af71-2d3c896282a0","resourceVersion":"425","creationTimestamp":"2024-01-03T20:16:24Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-004925","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-004925","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T20_16_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-03T20:16:24Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0103 20:17:14.018551  478496 pod_ready.go:92] pod "kube-proxy-dz4jl" in "kube-system" namespace has status "Ready":"True"
	I0103 20:17:14.018571  478496 pod_ready.go:81] duration metric: took 79.24612ms waiting for pod "kube-proxy-dz4jl" in "kube-system" namespace to be "Ready" ...
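The 60ms pause logged at 20:17:14.002604 above comes from client-go's client-side token-bucket throttle (rest.Config defaults to QPS 5, Burst 10), not from API-server priority and fairness. A minimal sketch of building a clientset with those limits raised; the helper name and the 50/100 values are illustrative, not what minikube ships:

	// Sketch: lift client-go's default client-side throttle, the source of the
	// "Waited for ... due to client-side throttling" lines above.
	// The QPS and Burst values below are illustrative.
	package sketch

	import (
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func newFastClientset(kubeconfig string) (*kubernetes.Clientset, error) {
		cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
		if err != nil {
			return nil, err
		}
		cfg.QPS = 50    // client-go default is 5 requests/second
		cfg.Burst = 100 // client-go default is 10
		return kubernetes.NewForConfig(cfg)
	}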
	I0103 20:17:14.018583  478496 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-004925" in "kube-system" namespace to be "Ready" ...
	I0103 20:17:14.203019  478496 request.go:629] Waited for 184.367894ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-004925
	I0103 20:17:14.203082  478496 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-004925
	I0103 20:17:14.203094  478496 round_trippers.go:469] Request Headers:
	I0103 20:17:14.203103  478496 round_trippers.go:473]     Accept: application/json, */*
	I0103 20:17:14.203111  478496 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0103 20:17:14.205849  478496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 20:17:14.205873  478496 round_trippers.go:577] Response Headers:
	I0103 20:17:14.205881  478496 round_trippers.go:580]     Date: Wed, 03 Jan 2024 20:17:14 GMT
	I0103 20:17:14.205888  478496 round_trippers.go:580]     Audit-Id: 182f0354-2530-4f0d-9d94-1b60790d3e7f
	I0103 20:17:14.205895  478496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 20:17:14.205903  478496 round_trippers.go:580]     Content-Type: application/json
	I0103 20:17:14.205913  478496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2dd5fe00-4fef-4b9d-9976-ced013b24f01
	I0103 20:17:14.205920  478496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1353a2-568a-4e94-914a-294122a8013a
	I0103 20:17:14.206122  478496 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-004925","namespace":"kube-system","uid":"7eff8446-bd7f-47a5-9d38-4c8b87c1ddf1","resourceVersion":"322","creationTimestamp":"2024-01-03T20:16:28Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"bba633dfeb251136b1652b1331b3b622","kubernetes.io/config.mirror":"bba633dfeb251136b1652b1331b3b622","kubernetes.io/config.seen":"2024-01-03T20:16:28.322944206Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-004925","uid":"106a4418-70f9-4422-af71-2d3c896282a0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T20:16:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4676 chars]
	I0103 20:17:14.402712  478496 request.go:629] Waited for 196.167777ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-004925
	I0103 20:17:14.402798  478496 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-004925
	I0103 20:17:14.402809  478496 round_trippers.go:469] Request Headers:
	I0103 20:17:14.402818  478496 round_trippers.go:473]     Accept: application/json, */*
	I0103 20:17:14.402825  478496 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0103 20:17:14.405294  478496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 20:17:14.405318  478496 round_trippers.go:577] Response Headers:
	I0103 20:17:14.405327  478496 round_trippers.go:580]     Date: Wed, 03 Jan 2024 20:17:14 GMT
	I0103 20:17:14.405333  478496 round_trippers.go:580]     Audit-Id: 956f5120-8d21-406c-95c0-dec8242148d5
	I0103 20:17:14.405340  478496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 20:17:14.405364  478496 round_trippers.go:580]     Content-Type: application/json
	I0103 20:17:14.405378  478496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2dd5fe00-4fef-4b9d-9976-ced013b24f01
	I0103 20:17:14.405385  478496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1353a2-568a-4e94-914a-294122a8013a
	I0103 20:17:14.405494  478496 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-004925","uid":"106a4418-70f9-4422-af71-2d3c896282a0","resourceVersion":"425","creationTimestamp":"2024-01-03T20:16:24Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-004925","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-004925","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T20_16_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-03T20:16:24Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0103 20:17:14.405917  478496 pod_ready.go:92] pod "kube-scheduler-multinode-004925" in "kube-system" namespace has status "Ready":"True"
	I0103 20:17:14.405933  478496 pod_ready.go:81] duration metric: took 387.343189ms waiting for pod "kube-scheduler-multinode-004925" in "kube-system" namespace to be "Ready" ...
	I0103 20:17:14.405945  478496 pod_ready.go:38] duration metric: took 2.000508915s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
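Each readiness check above is one GET on the pod plus one GET on its node, repeated until the pod's Ready condition reports True. A minimal client-go sketch of that poll, assuming an already-configured clientset; the helper name and the 2-second interval are illustrative, not minikube's actual pod_ready.go code:

	// Sketch: poll a pod until its Ready condition is True or the timeout
	// elapses, mirroring the GET-per-iteration pattern in the log above.
	package sketch

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return err
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					return nil // pod has status "Ready":"True"
				}
			}
			time.Sleep(2 * time.Second) // interval is illustrative
		}
		return fmt.Errorf("pod %s/%s not Ready within %v", ns, name, timeout)
	}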
	I0103 20:17:14.405965  478496 api_server.go:52] waiting for apiserver process to appear ...
	I0103 20:17:14.406025  478496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 20:17:14.417577  478496 command_runner.go:130] > 1270
	I0103 20:17:14.418861  478496 api_server.go:72] duration metric: took 32.662299118s to wait for apiserver process to appear ...
	I0103 20:17:14.418880  478496 api_server.go:88] waiting for apiserver healthz status ...
	I0103 20:17:14.418899  478496 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0103 20:17:14.427559  478496 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I0103 20:17:14.427638  478496 round_trippers.go:463] GET https://192.168.58.2:8443/version
	I0103 20:17:14.427649  478496 round_trippers.go:469] Request Headers:
	I0103 20:17:14.427658  478496 round_trippers.go:473]     Accept: application/json, */*
	I0103 20:17:14.427665  478496 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0103 20:17:14.428814  478496 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0103 20:17:14.428837  478496 round_trippers.go:577] Response Headers:
	I0103 20:17:14.428846  478496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1353a2-568a-4e94-914a-294122a8013a
	I0103 20:17:14.428853  478496 round_trippers.go:580]     Content-Length: 264
	I0103 20:17:14.428874  478496 round_trippers.go:580]     Date: Wed, 03 Jan 2024 20:17:14 GMT
	I0103 20:17:14.428886  478496 round_trippers.go:580]     Audit-Id: 54080147-a002-4d63-8b36-ad3957519a3f
	I0103 20:17:14.428893  478496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 20:17:14.428905  478496 round_trippers.go:580]     Content-Type: application/json
	I0103 20:17:14.428911  478496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2dd5fe00-4fef-4b9d-9976-ced013b24f01
	I0103 20:17:14.428933  478496 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.4",
	  "gitCommit": "bae2c62678db2b5053817bc97181fcc2e8388103",
	  "gitTreeState": "clean",
	  "buildDate": "2023-11-15T16:48:54Z",
	  "goVersion": "go1.20.11",
	  "compiler": "gc",
	  "platform": "linux/arm64"
	}
	I0103 20:17:14.429033  478496 api_server.go:141] control plane version: v1.28.4
	I0103 20:17:14.429049  478496 api_server.go:131] duration metric: took 10.163188ms to wait for apiserver health ...
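The apiserver gate is two plain HTTPS probes: /healthz must return 200 with body "ok", then /version yields the gitVersion (v1.28.4 here). A stripped-down sketch; the InsecureSkipVerify transport is an illustration-only shortcut, whereas the real client trusts the cluster CA and presents client certificates:

	// Sketch: the healthz and version probes above, reduced to plain HTTP.
	// InsecureSkipVerify is a shortcut for illustration only.
	package sketch

	import (
		"crypto/tls"
		"encoding/json"
		"fmt"
		"io"
		"net/http"
	)

	func checkAPIServer(base string) error {
		client := &http.Client{Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		}}
		resp, err := client.Get(base + "/healthz")
		if err != nil {
			return err
		}
		body, _ := io.ReadAll(resp.Body)
		resp.Body.Close()
		if resp.StatusCode != http.StatusOK || string(body) != "ok" {
			return fmt.Errorf("healthz: %d %q", resp.StatusCode, body)
		}
		resp, err = client.Get(base + "/version")
		if err != nil {
			return err
		}
		defer resp.Body.Close()
		var v struct {
			GitVersion string `json:"gitVersion"`
		}
		if err := json.NewDecoder(resp.Body).Decode(&v); err != nil {
			return err
		}
		fmt.Println("control plane version:", v.GitVersion) // e.g. v1.28.4
		return nil
	}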
	I0103 20:17:14.429057  478496 system_pods.go:43] waiting for kube-system pods to appear ...
	I0103 20:17:14.602423  478496 request.go:629] Waited for 173.301665ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0103 20:17:14.602544  478496 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0103 20:17:14.602556  478496 round_trippers.go:469] Request Headers:
	I0103 20:17:14.602566  478496 round_trippers.go:473]     Accept: application/json, */*
	I0103 20:17:14.602573  478496 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0103 20:17:14.606555  478496 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0103 20:17:14.606584  478496 round_trippers.go:577] Response Headers:
	I0103 20:17:14.606594  478496 round_trippers.go:580]     Audit-Id: 5ca4895a-3349-46bb-b4e6-08d2e79dbb4d
	I0103 20:17:14.606616  478496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 20:17:14.606629  478496 round_trippers.go:580]     Content-Type: application/json
	I0103 20:17:14.606642  478496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2dd5fe00-4fef-4b9d-9976-ced013b24f01
	I0103 20:17:14.606649  478496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1353a2-568a-4e94-914a-294122a8013a
	I0103 20:17:14.606660  478496 round_trippers.go:580]     Date: Wed, 03 Jan 2024 20:17:14 GMT
	I0103 20:17:14.607406  478496 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"445"},"items":[{"metadata":{"name":"coredns-5dd5756b68-g2x92","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f982667b-3ee3-4aaa-9b63-2bee4f32be8f","resourceVersion":"441","creationTimestamp":"2024-01-03T20:16:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"8ee1252d-62a1-4cc7-8c78-b162002b9076","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-03T20:16:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8ee1252d-62a1-4cc7-8c78-b162002b9076\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55612 chars]
	I0103 20:17:14.609729  478496 system_pods.go:59] 8 kube-system pods found
	I0103 20:17:14.609756  478496 system_pods.go:61] "coredns-5dd5756b68-g2x92" [f982667b-3ee3-4aaa-9b63-2bee4f32be8f] Running
	I0103 20:17:14.609763  478496 system_pods.go:61] "etcd-multinode-004925" [5cab1935-b192-4f00-b293-deb85397ee0e] Running
	I0103 20:17:14.609768  478496 system_pods.go:61] "kindnet-stdx9" [9371f41f-cf0e-4412-a9cc-aef70db86495] Running
	I0103 20:17:14.609777  478496 system_pods.go:61] "kube-apiserver-multinode-004925" [7a543b23-069e-4da3-8d6d-c485af508606] Running
	I0103 20:17:14.609790  478496 system_pods.go:61] "kube-controller-manager-multinode-004925" [9e73201b-daa5-45ae-ab17-a0117f61c545] Running
	I0103 20:17:14.609795  478496 system_pods.go:61] "kube-proxy-dz4jl" [aa4b165f-582a-4c17-a00b-9552514c2006] Running
	I0103 20:17:14.609801  478496 system_pods.go:61] "kube-scheduler-multinode-004925" [7eff8446-bd7f-47a5-9d38-4c8b87c1ddf1] Running
	I0103 20:17:14.609807  478496 system_pods.go:61] "storage-provisioner" [47ba7e03-3ba6-4a93-80c0-6ff32c31f9bb] Running
	I0103 20:17:14.609814  478496 system_pods.go:74] duration metric: took 180.751402ms to wait for pod list to return data ...
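The system_pods pass lists kube-system once and checks every pod's phase, as echoed pod by pod above. A sketch of the same check; the function name is illustrative:

	// Sketch: the system_pods check, listing kube-system and requiring each
	// pod to be Running.
	package sketch

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	func verifySystemPods(ctx context.Context, cs kubernetes.Interface) error {
		pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
		if err != nil {
			return err
		}
		fmt.Printf("%d kube-system pods found\n", len(pods.Items))
		for _, p := range pods.Items {
			if p.Status.Phase != corev1.PodRunning {
				return fmt.Errorf("pod %q is %s, want Running", p.Name, p.Status.Phase)
			}
		}
		return nil
	}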
	I0103 20:17:14.609821  478496 default_sa.go:34] waiting for default service account to be created ...
	I0103 20:17:14.802189  478496 request.go:629] Waited for 192.26547ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/default/serviceaccounts
	I0103 20:17:14.802271  478496 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/default/serviceaccounts
	I0103 20:17:14.802283  478496 round_trippers.go:469] Request Headers:
	I0103 20:17:14.802292  478496 round_trippers.go:473]     Accept: application/json, */*
	I0103 20:17:14.802300  478496 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0103 20:17:14.804695  478496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 20:17:14.804723  478496 round_trippers.go:577] Response Headers:
	I0103 20:17:14.804732  478496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 20:17:14.804739  478496 round_trippers.go:580]     Content-Type: application/json
	I0103 20:17:14.804746  478496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2dd5fe00-4fef-4b9d-9976-ced013b24f01
	I0103 20:17:14.804752  478496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1353a2-568a-4e94-914a-294122a8013a
	I0103 20:17:14.804780  478496 round_trippers.go:580]     Content-Length: 261
	I0103 20:17:14.804787  478496 round_trippers.go:580]     Date: Wed, 03 Jan 2024 20:17:14 GMT
	I0103 20:17:14.804793  478496 round_trippers.go:580]     Audit-Id: b581191e-4b87-4c99-8b19-40ac893d3738
	I0103 20:17:14.804826  478496 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"445"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"a4c3fefa-2479-46c5-bbd4-14ab1a412838","resourceVersion":"350","creationTimestamp":"2024-01-03T20:16:40Z"}}]}
	I0103 20:17:14.805083  478496 default_sa.go:45] found service account: "default"
	I0103 20:17:14.805105  478496 default_sa.go:55] duration metric: took 195.273574ms for default service account to be created ...
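default_sa.go waits for kubeadm's post-start controllers to mint the "default" ServiceAccount. The log does this with a List of the default namespace; a Get with a short retry works just as well for illustration:

	// Sketch: wait for the "default" service account to exist. The 500ms
	// retry interval is illustrative.
	package sketch

	import (
		"context"
		"fmt"
		"time"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	func waitDefaultSA(ctx context.Context, cs kubernetes.Interface, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if _, err := cs.CoreV1().ServiceAccounts("default").Get(ctx, "default", metav1.GetOptions{}); err == nil {
				return nil // found service account: "default"
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("default service account not created within %v", timeout)
	}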
	I0103 20:17:14.805115  478496 system_pods.go:116] waiting for k8s-apps to be running ...
	I0103 20:17:15.002573  478496 request.go:629] Waited for 197.384808ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0103 20:17:15.002632  478496 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0103 20:17:15.002645  478496 round_trippers.go:469] Request Headers:
	I0103 20:17:15.002653  478496 round_trippers.go:473]     Accept: application/json, */*
	I0103 20:17:15.002661  478496 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0103 20:17:15.014144  478496 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0103 20:17:15.014225  478496 round_trippers.go:577] Response Headers:
	I0103 20:17:15.014250  478496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 20:17:15.014273  478496 round_trippers.go:580]     Content-Type: application/json
	I0103 20:17:15.014315  478496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2dd5fe00-4fef-4b9d-9976-ced013b24f01
	I0103 20:17:15.014346  478496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1353a2-568a-4e94-914a-294122a8013a
	I0103 20:17:15.014367  478496 round_trippers.go:580]     Date: Wed, 03 Jan 2024 20:17:15 GMT
	I0103 20:17:15.014390  478496 round_trippers.go:580]     Audit-Id: cf4fe0d6-250c-491c-a293-50bbcb777aab
	I0103 20:17:15.016477  478496 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"446"},"items":[{"metadata":{"name":"coredns-5dd5756b68-g2x92","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f982667b-3ee3-4aaa-9b63-2bee4f32be8f","resourceVersion":"441","creationTimestamp":"2024-01-03T20:16:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"8ee1252d-62a1-4cc7-8c78-b162002b9076","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-03T20:16:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8ee1252d-62a1-4cc7-8c78-b162002b9076\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55612 chars]
	I0103 20:17:15.019101  478496 system_pods.go:86] 8 kube-system pods found
	I0103 20:17:15.019182  478496 system_pods.go:89] "coredns-5dd5756b68-g2x92" [f982667b-3ee3-4aaa-9b63-2bee4f32be8f] Running
	I0103 20:17:15.019218  478496 system_pods.go:89] "etcd-multinode-004925" [5cab1935-b192-4f00-b293-deb85397ee0e] Running
	I0103 20:17:15.019253  478496 system_pods.go:89] "kindnet-stdx9" [9371f41f-cf0e-4412-a9cc-aef70db86495] Running
	I0103 20:17:15.019279  478496 system_pods.go:89] "kube-apiserver-multinode-004925" [7a543b23-069e-4da3-8d6d-c485af508606] Running
	I0103 20:17:15.019300  478496 system_pods.go:89] "kube-controller-manager-multinode-004925" [9e73201b-daa5-45ae-ab17-a0117f61c545] Running
	I0103 20:17:15.019323  478496 system_pods.go:89] "kube-proxy-dz4jl" [aa4b165f-582a-4c17-a00b-9552514c2006] Running
	I0103 20:17:15.019354  478496 system_pods.go:89] "kube-scheduler-multinode-004925" [7eff8446-bd7f-47a5-9d38-4c8b87c1ddf1] Running
	I0103 20:17:15.019378  478496 system_pods.go:89] "storage-provisioner" [47ba7e03-3ba6-4a93-80c0-6ff32c31f9bb] Running
	I0103 20:17:15.019400  478496 system_pods.go:126] duration metric: took 214.272135ms to wait for k8s-apps to be running ...
	I0103 20:17:15.019423  478496 system_svc.go:44] waiting for kubelet service to be running ....
	I0103 20:17:15.019522  478496 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0103 20:17:15.038399  478496 system_svc.go:56] duration metric: took 18.964708ms WaitForService to wait for kubelet.
	I0103 20:17:15.038425  478496 kubeadm.go:581] duration metric: took 33.281868444s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
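The kubelet check above is only an exit-code test: `systemctl is-active --quiet` exits 0 iff the unit is active. minikube runs it over SSH inside the node; a local sketch with os/exec, mirroring the exact argv from the log:

	// Sketch: kubelet liveness as an exit-code test on systemctl, using the
	// same argv the log shows minikube running over SSH.
	package sketch

	import (
		"fmt"
		"os/exec"
	)

	func kubeletRunning() (bool, error) {
		err := exec.Command("sudo", "systemctl", "is-active", "--quiet", "service", "kubelet").Run()
		if err == nil {
			return true, nil
		}
		if _, ok := err.(*exec.ExitError); ok {
			return false, nil // unit inactive or failed, not an exec failure
		}
		return false, fmt.Errorf("running systemctl: %w", err)
	}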
	I0103 20:17:15.038456  478496 node_conditions.go:102] verifying NodePressure condition ...
	I0103 20:17:15.202908  478496 request.go:629] Waited for 164.376021ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes
	I0103 20:17:15.202977  478496 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes
	I0103 20:17:15.202989  478496 round_trippers.go:469] Request Headers:
	I0103 20:17:15.202999  478496 round_trippers.go:473]     Accept: application/json, */*
	I0103 20:17:15.203009  478496 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0103 20:17:15.205896  478496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 20:17:15.205980  478496 round_trippers.go:577] Response Headers:
	I0103 20:17:15.205994  478496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2dd5fe00-4fef-4b9d-9976-ced013b24f01
	I0103 20:17:15.206001  478496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1353a2-568a-4e94-914a-294122a8013a
	I0103 20:17:15.206008  478496 round_trippers.go:580]     Date: Wed, 03 Jan 2024 20:17:15 GMT
	I0103 20:17:15.206015  478496 round_trippers.go:580]     Audit-Id: 7c29e27d-403c-4d65-a464-7c07e2343e75
	I0103 20:17:15.206024  478496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 20:17:15.206031  478496 round_trippers.go:580]     Content-Type: application/json
	I0103 20:17:15.206170  478496 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"446"},"items":[{"metadata":{"name":"multinode-004925","uid":"106a4418-70f9-4422-af71-2d3c896282a0","resourceVersion":"425","creationTimestamp":"2024-01-03T20:16:24Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-004925","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-004925","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T20_16_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 6082 chars]
	I0103 20:17:15.206649  478496 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0103 20:17:15.206677  478496 node_conditions.go:123] node cpu capacity is 2
	I0103 20:17:15.206689  478496 node_conditions.go:105] duration metric: took 168.227416ms to run NodePressure ...
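The NodePressure verification reads capacity straight off Node.Status, which is where the 203034800Ki ephemeral-storage and 2-CPU figures above come from. A sketch; the function name is illustrative:

	// Sketch: read per-node capacity, the source of the ephemeral storage
	// and cpu figures logged above.
	package sketch

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	func logNodeCapacity(ctx context.Context, cs kubernetes.Interface) error {
		nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
		if err != nil {
			return err
		}
		for _, n := range nodes.Items {
			eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
			cpu := n.Status.Capacity[corev1.ResourceCPU]
			fmt.Printf("node %s: ephemeral storage %s, cpu %s\n", n.Name, eph.String(), cpu.String())
		}
		return nil
	}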
	I0103 20:17:15.206707  478496 start.go:228] waiting for startup goroutines ...
	I0103 20:17:15.206720  478496 start.go:233] waiting for cluster config update ...
	I0103 20:17:15.206730  478496 start.go:242] writing updated cluster config ...
	I0103 20:17:15.210183  478496 out.go:177] 
	I0103 20:17:15.212366  478496 config.go:182] Loaded profile config "multinode-004925": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0103 20:17:15.212468  478496 profile.go:148] Saving config to /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/multinode-004925/config.json ...
	I0103 20:17:15.214582  478496 out.go:177] * Starting worker node multinode-004925-m02 in cluster multinode-004925
	I0103 20:17:15.216780  478496 cache.go:121] Beginning downloading kic base image for docker with crio
	I0103 20:17:15.218715  478496 out.go:177] * Pulling base image v0.0.42-1703498848-17857 ...
	I0103 20:17:15.220585  478496 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0103 20:17:15.220612  478496 cache.go:56] Caching tarball of preloaded images
	I0103 20:17:15.220640  478496 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c in local docker daemon
	I0103 20:17:15.220743  478496 preload.go:174] Found /home/jenkins/minikube-integration/17885-409390/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0103 20:17:15.220759  478496 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on crio
	I0103 20:17:15.220900  478496 profile.go:148] Saving config to /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/multinode-004925/config.json ...
	I0103 20:17:15.238705  478496 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c in local docker daemon, skipping pull
	I0103 20:17:15.238733  478496 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c exists in daemon, skipping load
	I0103 20:17:15.238755  478496 cache.go:194] Successfully downloaded all kic artifacts
	I0103 20:17:15.238784  478496 start.go:365] acquiring machines lock for multinode-004925-m02: {Name:mkfe123d35f16dd3749e3475b7a8dc29803ed0f2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0103 20:17:15.238922  478496 start.go:369] acquired machines lock for "multinode-004925-m02" in 115.281µs
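The machines lock serializes concurrent provisioning; the log shows its shape (500ms retry delay, 10m timeout). minikube uses a named-mutex library for this, so the O_EXCL lockfile below is only a stand-in that reproduces the Delay/Timeout behavior from the log:

	// Sketch: a retrying lockfile, illustrating (not reproducing) the
	// machines lock. Delay/Timeout mirror the values logged above.
	package sketch

	import (
		"fmt"
		"os"
		"time"
	)

	func acquireLock(path string, delay, timeout time.Duration) (release func(), err error) {
		deadline := time.Now().Add(timeout)
		for {
			f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
			if err == nil {
				f.Close()
				return func() { os.Remove(path) }, nil
			}
			if !os.IsExist(err) {
				return nil, err
			}
			if time.Now().After(deadline) {
				return nil, fmt.Errorf("lock %s: timed out after %v", path, timeout)
			}
			time.Sleep(delay) // 500ms in the log above
		}
	}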
	I0103 20:17:15.238951  478496 start.go:93] Provisioning new machine with config: &{Name:multinode-004925 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-004925 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mou
nt9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name:m02 IP: Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0103 20:17:15.239036  478496 start.go:125] createHost starting for "m02" (driver="docker")
	I0103 20:17:15.241552  478496 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0103 20:17:15.241662  478496 start.go:159] libmachine.API.Create for "multinode-004925" (driver="docker")
	I0103 20:17:15.241688  478496 client.go:168] LocalClient.Create starting
	I0103 20:17:15.241785  478496 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17885-409390/.minikube/certs/ca.pem
	I0103 20:17:15.241824  478496 main.go:141] libmachine: Decoding PEM data...
	I0103 20:17:15.241848  478496 main.go:141] libmachine: Parsing certificate...
	I0103 20:17:15.241909  478496 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17885-409390/.minikube/certs/cert.pem
	I0103 20:17:15.241934  478496 main.go:141] libmachine: Decoding PEM data...
	I0103 20:17:15.241949  478496 main.go:141] libmachine: Parsing certificate...
	I0103 20:17:15.242204  478496 cli_runner.go:164] Run: docker network inspect multinode-004925 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0103 20:17:15.260447  478496 network_create.go:77] Found existing network {name:multinode-004925 subnet:0x40027c0f30 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 58 1] mtu:1500}
	I0103 20:17:15.260492  478496 kic.go:121] calculated static IP "192.168.58.3" for the "multinode-004925-m02" container
	I0103 20:17:15.260570  478496 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0103 20:17:15.277563  478496 cli_runner.go:164] Run: docker volume create multinode-004925-m02 --label name.minikube.sigs.k8s.io=multinode-004925-m02 --label created_by.minikube.sigs.k8s.io=true
	I0103 20:17:15.296280  478496 oci.go:103] Successfully created a docker volume multinode-004925-m02
	I0103 20:17:15.296362  478496 cli_runner.go:164] Run: docker run --rm --name multinode-004925-m02-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-004925-m02 --entrypoint /usr/bin/test -v multinode-004925-m02:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c -d /var/lib
	I0103 20:17:15.872070  478496 oci.go:107] Successfully prepared a docker volume multinode-004925-m02
	I0103 20:17:15.872106  478496 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0103 20:17:15.872126  478496 kic.go:194] Starting extracting preloaded images to volume ...
	I0103 20:17:15.872215  478496 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17885-409390/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v multinode-004925-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c -I lz4 -xf /preloaded.tar -C /extractDir
	I0103 20:17:20.243852  478496 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17885-409390/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v multinode-004925-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c -I lz4 -xf /preloaded.tar -C /extractDir: (4.37158805s)
	I0103 20:17:20.243886  478496 kic.go:203] duration metric: took 4.371757 seconds to extract preloaded images to volume
	W0103 20:17:20.244024  478496 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0103 20:17:20.244151  478496 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0103 20:17:20.315315  478496 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-004925-m02 --name multinode-004925-m02 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-004925-m02 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-004925-m02 --network multinode-004925 --ip 192.168.58.3 --volume multinode-004925-m02:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c
	I0103 20:17:20.669653  478496 cli_runner.go:164] Run: docker container inspect multinode-004925-m02 --format={{.State.Running}}
	I0103 20:17:20.692426  478496 cli_runner.go:164] Run: docker container inspect multinode-004925-m02 --format={{.State.Status}}
	I0103 20:17:20.721232  478496 cli_runner.go:164] Run: docker exec multinode-004925-m02 stat /var/lib/dpkg/alternatives/iptables
	I0103 20:17:20.792188  478496 oci.go:144] the created container "multinode-004925-m02" has a running status.
	I0103 20:17:20.792217  478496 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17885-409390/.minikube/machines/multinode-004925-m02/id_rsa...
	I0103 20:17:21.791357  478496 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-409390/.minikube/machines/multinode-004925-m02/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0103 20:17:21.791411  478496 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17885-409390/.minikube/machines/multinode-004925-m02/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0103 20:17:21.824749  478496 cli_runner.go:164] Run: docker container inspect multinode-004925-m02 --format={{.State.Status}}
	I0103 20:17:21.854389  478496 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0103 20:17:21.854413  478496 kic_runner.go:114] Args: [docker exec --privileged multinode-004925-m02 chown docker:docker /home/docker/.ssh/authorized_keys]
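The "Creating ssh key for kic" step generates an RSA keypair on the host and copies the public half into /home/docker/.ssh/authorized_keys inside the container (the 381-byte transfer above). A condensed sketch with crypto/rsa and x/crypto/ssh; the output paths are the caller's choice:

	// Sketch: generate the per-node SSH keypair and the authorized_keys
	// payload, roughly what the "Creating ssh key for kic" step does.
	package sketch

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"encoding/pem"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func writeSSHKeyPair(privPath string) error {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			return err
		}
		privPEM := pem.EncodeToMemory(&pem.Block{
			Type:  "RSA PRIVATE KEY",
			Bytes: x509.MarshalPKCS1PrivateKey(key),
		})
		if err := os.WriteFile(privPath, privPEM, 0o600); err != nil {
			return err
		}
		pub, err := ssh.NewPublicKey(&key.PublicKey)
		if err != nil {
			return err
		}
		// This is the payload copied into /home/docker/.ssh/authorized_keys.
		return os.WriteFile(privPath+".pub", ssh.MarshalAuthorizedKey(pub), 0o644)
	}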
	I0103 20:17:21.928300  478496 cli_runner.go:164] Run: docker container inspect multinode-004925-m02 --format={{.State.Status}}
	I0103 20:17:21.960271  478496 machine.go:88] provisioning docker machine ...
	I0103 20:17:21.960306  478496 ubuntu.go:169] provisioning hostname "multinode-004925-m02"
	I0103 20:17:21.960378  478496 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-004925-m02
	I0103 20:17:21.995070  478496 main.go:141] libmachine: Using SSH client type: native
	I0103 20:17:21.995492  478496 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3beb50] 0x3c12c0 <nil>  [] 0s} 127.0.0.1 33183 <nil> <nil>}
	I0103 20:17:21.995511  478496 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-004925-m02 && echo "multinode-004925-m02" | sudo tee /etc/hostname
	I0103 20:17:22.180872  478496 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-004925-m02
	
	I0103 20:17:22.180959  478496 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-004925-m02
	I0103 20:17:22.208029  478496 main.go:141] libmachine: Using SSH client type: native
	I0103 20:17:22.208439  478496 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3beb50] 0x3c12c0 <nil>  [] 0s} 127.0.0.1 33183 <nil> <nil>}
	I0103 20:17:22.208463  478496 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-004925-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-004925-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-004925-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0103 20:17:22.348074  478496 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0103 20:17:22.348101  478496 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17885-409390/.minikube CaCertPath:/home/jenkins/minikube-integration/17885-409390/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17885-409390/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17885-409390/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17885-409390/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17885-409390/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17885-409390/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17885-409390/.minikube}
	I0103 20:17:22.348118  478496 ubuntu.go:177] setting up certificates
	I0103 20:17:22.348129  478496 provision.go:83] configureAuth start
	I0103 20:17:22.348187  478496 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-004925-m02
	I0103 20:17:22.369022  478496 provision.go:138] copyHostCerts
	I0103 20:17:22.369073  478496 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-409390/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17885-409390/.minikube/key.pem
	I0103 20:17:22.369111  478496 exec_runner.go:144] found /home/jenkins/minikube-integration/17885-409390/.minikube/key.pem, removing ...
	I0103 20:17:22.369122  478496 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17885-409390/.minikube/key.pem
	I0103 20:17:22.369208  478496 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17885-409390/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17885-409390/.minikube/key.pem (1679 bytes)
	I0103 20:17:22.369311  478496 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-409390/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17885-409390/.minikube/ca.pem
	I0103 20:17:22.369337  478496 exec_runner.go:144] found /home/jenkins/minikube-integration/17885-409390/.minikube/ca.pem, removing ...
	I0103 20:17:22.369345  478496 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17885-409390/.minikube/ca.pem
	I0103 20:17:22.369375  478496 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17885-409390/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17885-409390/.minikube/ca.pem (1078 bytes)
	I0103 20:17:22.369433  478496 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-409390/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17885-409390/.minikube/cert.pem
	I0103 20:17:22.369459  478496 exec_runner.go:144] found /home/jenkins/minikube-integration/17885-409390/.minikube/cert.pem, removing ...
	I0103 20:17:22.369464  478496 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17885-409390/.minikube/cert.pem
	I0103 20:17:22.369497  478496 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17885-409390/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17885-409390/.minikube/cert.pem (1123 bytes)
	I0103 20:17:22.369554  478496 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17885-409390/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17885-409390/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17885-409390/.minikube/certs/ca-key.pem org=jenkins.multinode-004925-m02 san=[192.168.58.3 127.0.0.1 localhost 127.0.0.1 minikube multinode-004925-m02]
	I0103 20:17:22.862239  478496 provision.go:172] copyRemoteCerts
	I0103 20:17:22.862344  478496 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0103 20:17:22.862404  478496 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-004925-m02
	I0103 20:17:22.888361  478496 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/17885-409390/.minikube/machines/multinode-004925-m02/id_rsa Username:docker}
	I0103 20:17:22.993714  478496 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-409390/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0103 20:17:22.993784  478496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-409390/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0103 20:17:23.032259  478496 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-409390/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0103 20:17:23.032353  478496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-409390/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I0103 20:17:23.062820  478496 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-409390/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0103 20:17:23.062884  478496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-409390/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0103 20:17:23.092873  478496 provision.go:86] duration metric: configureAuth took 744.729511ms
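configureAuth signs a per-machine server certificate whose SANs match the san=[...] list logged at 20:17:22.369554, then scp's cert and key into /etc/docker. A condensed crypto/x509 sketch; CA loading and PEM encoding are omitted, and the three-year lifetime is an approximation of the CertExpiration:26280h0m0s in the cluster config above:

	// Sketch: sign a server cert with the SAN list from the log. CA handling
	// and PEM output are simplified for illustration.
	package sketch

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"math/big"
		"net"
		"time"
	)

	func signServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, *rsa.PrivateKey, error) {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			return nil, nil, err
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.multinode-004925-m02"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			DNSNames:     []string{"localhost", "minikube", "multinode-004925-m02"},
			IPAddresses:  []net.IP{net.ParseIP("192.168.58.3"), net.ParseIP("127.0.0.1")},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
		return der, key, err
	}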
	I0103 20:17:23.092898  478496 ubuntu.go:193] setting minikube options for container-runtime
	I0103 20:17:23.093093  478496 config.go:182] Loaded profile config "multinode-004925": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0103 20:17:23.093207  478496 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-004925-m02
	I0103 20:17:23.111861  478496 main.go:141] libmachine: Using SSH client type: native
	I0103 20:17:23.112277  478496 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3beb50] 0x3c12c0 <nil>  [] 0s} 127.0.0.1 33183 <nil> <nil>}
	I0103 20:17:23.112298  478496 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0103 20:17:23.374328  478496 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0103 20:17:23.374400  478496 machine.go:91] provisioned docker machine in 1.414105244s
	I0103 20:17:23.374424  478496 client.go:171] LocalClient.Create took 8.13272914s
	I0103 20:17:23.374455  478496 start.go:167] duration metric: libmachine.API.Create for "multinode-004925" took 8.132792687s
	I0103 20:17:23.374498  478496 start.go:300] post-start starting for "multinode-004925-m02" (driver="docker")
	I0103 20:17:23.374583  478496 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0103 20:17:23.374686  478496 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0103 20:17:23.374768  478496 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-004925-m02
	I0103 20:17:23.396654  478496 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/17885-409390/.minikube/machines/multinode-004925-m02/id_rsa Username:docker}
	I0103 20:17:23.501537  478496 ssh_runner.go:195] Run: cat /etc/os-release
	I0103 20:17:23.505708  478496 command_runner.go:130] > PRETTY_NAME="Ubuntu 22.04.3 LTS"
	I0103 20:17:23.505728  478496 command_runner.go:130] > NAME="Ubuntu"
	I0103 20:17:23.505736  478496 command_runner.go:130] > VERSION_ID="22.04"
	I0103 20:17:23.505743  478496 command_runner.go:130] > VERSION="22.04.3 LTS (Jammy Jellyfish)"
	I0103 20:17:23.505749  478496 command_runner.go:130] > VERSION_CODENAME=jammy
	I0103 20:17:23.505754  478496 command_runner.go:130] > ID=ubuntu
	I0103 20:17:23.505764  478496 command_runner.go:130] > ID_LIKE=debian
	I0103 20:17:23.505771  478496 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I0103 20:17:23.505777  478496 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I0103 20:17:23.505786  478496 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0103 20:17:23.505798  478496 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0103 20:17:23.505805  478496 command_runner.go:130] > UBUNTU_CODENAME=jammy
	I0103 20:17:23.505850  478496 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0103 20:17:23.505880  478496 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0103 20:17:23.505896  478496 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0103 20:17:23.505903  478496 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0103 20:17:23.505917  478496 filesync.go:126] Scanning /home/jenkins/minikube-integration/17885-409390/.minikube/addons for local assets ...
	I0103 20:17:23.505979  478496 filesync.go:126] Scanning /home/jenkins/minikube-integration/17885-409390/.minikube/files for local assets ...
	I0103 20:17:23.506068  478496 filesync.go:149] local asset: /home/jenkins/minikube-integration/17885-409390/.minikube/files/etc/ssl/certs/4147632.pem -> 4147632.pem in /etc/ssl/certs
	I0103 20:17:23.506078  478496 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-409390/.minikube/files/etc/ssl/certs/4147632.pem -> /etc/ssl/certs/4147632.pem
	I0103 20:17:23.506192  478496 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0103 20:17:23.516920  478496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-409390/.minikube/files/etc/ssl/certs/4147632.pem --> /etc/ssl/certs/4147632.pem (1708 bytes)
	I0103 20:17:23.547306  478496 start.go:303] post-start completed in 172.719329ms
	I0103 20:17:23.547683  478496 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-004925-m02
	I0103 20:17:23.569289  478496 profile.go:148] Saving config to /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/multinode-004925/config.json ...
	I0103 20:17:23.569573  478496 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0103 20:17:23.569618  478496 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-004925-m02
	I0103 20:17:23.587787  478496 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/17885-409390/.minikube/machines/multinode-004925-m02/id_rsa Username:docker}
	I0103 20:17:23.684540  478496 command_runner.go:130] > 18%
	I0103 20:17:23.684617  478496 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0103 20:17:23.690110  478496 command_runner.go:130] > 160G
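The two disk probes above are runnable as-is; on this node they reported 18% of /var in use and 160G free.

	df -h /var | awk 'NR==2{print $5}'    # percent of /var in use (e.g. 18%)
	df -BG /var | awk 'NR==2{print $4}'   # gigabytes still available (e.g. 160G)
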
	I0103 20:17:23.690481  478496 start.go:128] duration metric: createHost completed in 8.451431943s
	I0103 20:17:23.690501  478496 start.go:83] releasing machines lock for "multinode-004925-m02", held for 8.451567491s
	I0103 20:17:23.690590  478496 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-004925-m02
	I0103 20:17:23.712071  478496 out.go:177] * Found network options:
	I0103 20:17:23.713834  478496 out.go:177]   - NO_PROXY=192.168.58.2
	W0103 20:17:23.715876  478496 proxy.go:119] fail to check proxy env: Error ip not in block
	W0103 20:17:23.715920  478496 proxy.go:119] fail to check proxy env: Error ip not in block
	I0103 20:17:23.715992  478496 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0103 20:17:23.716038  478496 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-004925-m02
	I0103 20:17:23.716305  478496 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0103 20:17:23.716372  478496 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-004925-m02
	I0103 20:17:23.743976  478496 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/17885-409390/.minikube/machines/multinode-004925-m02/id_rsa Username:docker}
	I0103 20:17:23.758479  478496 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/17885-409390/.minikube/machines/multinode-004925-m02/id_rsa Username:docker}
	I0103 20:17:24.005102  478496 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0103 20:17:24.033299  478496 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0103 20:17:24.037057  478496 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I0103 20:17:24.037097  478496 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I0103 20:17:24.037106  478496 command_runner.go:130] > Device: b3h/179d	Inode: 2346105     Links: 1
	I0103 20:17:24.037115  478496 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0103 20:17:24.037122  478496 command_runner.go:130] > Access: 2023-06-14 14:44:50.000000000 +0000
	I0103 20:17:24.037136  478496 command_runner.go:130] > Modify: 2023-06-14 14:44:50.000000000 +0000
	I0103 20:17:24.037143  478496 command_runner.go:130] > Change: 2024-01-03 19:53:09.608915467 +0000
	I0103 20:17:24.037149  478496 command_runner.go:130] >  Birth: 2024-01-03 19:53:09.608915467 +0000
	I0103 20:17:24.037232  478496 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0103 20:17:24.064816  478496 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0103 20:17:24.064906  478496 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0103 20:17:24.115583  478496 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf, 
	I0103 20:17:24.115621  478496 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
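A minimal sketch of the CNI cleanup just performed, with the globs quoted so it can be pasted into a shell directly (the unquoted form in the log relies on SSH passing the patterns through): the stock loopback, bridge, and podman configs are sidelined by renaming rather than deleting.

	sudo find /etc/cni/net.d -maxdepth 1 -type f -name '*loopback.conf*' \
	  -not -name '*.mk_disabled' -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;
	sudo find /etc/cni/net.d -maxdepth 1 -type f \
	  \( \( -name '*bridge*' -o -name '*podman*' \) -a -not -name '*.mk_disabled' \) \
	  -printf '%p, ' -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;
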
	I0103 20:17:24.115631  478496 start.go:475] detecting cgroup driver to use...
	I0103 20:17:24.115664  478496 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0103 20:17:24.115735  478496 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0103 20:17:24.138803  478496 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0103 20:17:24.159037  478496 docker.go:203] disabling cri-docker service (if available) ...
	I0103 20:17:24.159145  478496 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0103 20:17:24.177376  478496 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0103 20:17:24.195496  478496 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0103 20:17:24.302253  478496 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0103 20:17:24.416092  478496 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I0103 20:17:24.416135  478496 docker.go:219] disabling docker service ...
	I0103 20:17:24.416228  478496 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0103 20:17:24.439252  478496 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0103 20:17:24.453126  478496 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0103 20:17:24.566602  478496 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I0103 20:17:24.566694  478496 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0103 20:17:24.580887  478496 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I0103 20:17:24.684537  478496 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
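The runtime switch performed above, collapsed into one sequence (unit names as in the log): Docker and cri-dockerd are stopped, disabled, and masked so CRI-O owns the node's container runtime, and the final is-active check confirms docker stayed down.

	sudo systemctl stop -f cri-docker.socket cri-docker.service docker.socket docker.service
	sudo systemctl disable cri-docker.socket docker.socket
	sudo systemctl mask cri-docker.service docker.service
	systemctl is-active --quiet docker || echo "docker is inactive"
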
	I0103 20:17:24.699277  478496 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0103 20:17:24.717973  478496 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0103 20:17:24.719255  478496 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0103 20:17:24.719347  478496 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0103 20:17:24.731538  478496 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0103 20:17:24.731604  478496 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0103 20:17:24.743904  478496 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0103 20:17:24.756189  478496 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0103 20:17:24.767959  478496 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0103 20:17:24.779730  478496 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0103 20:17:24.789931  478496 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0103 20:17:24.790050  478496 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0103 20:17:24.800267  478496 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0103 20:17:24.903241  478496 ssh_runner.go:195] Run: sudo systemctl restart crio
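The CRI-O configuration steps above, gathered into one runnable sequence (paths, image tag, and cgroup driver exactly as logged): point crictl at the CRI-O socket, set the pause image and cgroupfs driver in the drop-in, pin conmon to the pod cgroup, enable IPv4 forwarding, and restart the runtime.

	printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
	sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
	sudo systemctl daemon-reload && sudo systemctl restart crio
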
	I0103 20:17:25.034691  478496 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0103 20:17:25.034810  478496 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0103 20:17:25.040568  478496 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0103 20:17:25.040630  478496 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0103 20:17:25.040652  478496 command_runner.go:130] > Device: bch/188d	Inode: 190         Links: 1
	I0103 20:17:25.040674  478496 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0103 20:17:25.040693  478496 command_runner.go:130] > Access: 2024-01-03 20:17:25.014722360 +0000
	I0103 20:17:25.040713  478496 command_runner.go:130] > Modify: 2024-01-03 20:17:25.014722360 +0000
	I0103 20:17:25.040733  478496 command_runner.go:130] > Change: 2024-01-03 20:17:25.014722360 +0000
	I0103 20:17:25.040775  478496 command_runner.go:130] >  Birth: -
	I0103 20:17:25.040904  478496 start.go:543] Will wait 60s for crictl version
	I0103 20:17:25.040981  478496 ssh_runner.go:195] Run: which crictl
	I0103 20:17:25.045551  478496 command_runner.go:130] > /usr/bin/crictl
	I0103 20:17:25.045775  478496 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0103 20:17:25.089617  478496 command_runner.go:130] > Version:  0.1.0
	I0103 20:17:25.089964  478496 command_runner.go:130] > RuntimeName:  cri-o
	I0103 20:17:25.090010  478496 command_runner.go:130] > RuntimeVersion:  1.24.6
	I0103 20:17:25.090114  478496 command_runner.go:130] > RuntimeApiVersion:  v1
	I0103 20:17:25.093385  478496 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0103 20:17:25.093519  478496 ssh_runner.go:195] Run: crio --version
	I0103 20:17:25.138953  478496 command_runner.go:130] > crio version 1.24.6
	I0103 20:17:25.138979  478496 command_runner.go:130] > Version:          1.24.6
	I0103 20:17:25.138999  478496 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I0103 20:17:25.139023  478496 command_runner.go:130] > GitTreeState:     clean
	I0103 20:17:25.139037  478496 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I0103 20:17:25.139044  478496 command_runner.go:130] > GoVersion:        go1.18.2
	I0103 20:17:25.139049  478496 command_runner.go:130] > Compiler:         gc
	I0103 20:17:25.139073  478496 command_runner.go:130] > Platform:         linux/arm64
	I0103 20:17:25.139088  478496 command_runner.go:130] > Linkmode:         dynamic
	I0103 20:17:25.139108  478496 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0103 20:17:25.139121  478496 command_runner.go:130] > SeccompEnabled:   true
	I0103 20:17:25.139127  478496 command_runner.go:130] > AppArmorEnabled:  false
	I0103 20:17:25.139273  478496 ssh_runner.go:195] Run: crio --version
	I0103 20:17:25.197248  478496 command_runner.go:130] > crio version 1.24.6
	I0103 20:17:25.197305  478496 command_runner.go:130] > Version:          1.24.6
	I0103 20:17:25.197343  478496 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I0103 20:17:25.197363  478496 command_runner.go:130] > GitTreeState:     clean
	I0103 20:17:25.197383  478496 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I0103 20:17:25.197411  478496 command_runner.go:130] > GoVersion:        go1.18.2
	I0103 20:17:25.197431  478496 command_runner.go:130] > Compiler:         gc
	I0103 20:17:25.197449  478496 command_runner.go:130] > Platform:         linux/arm64
	I0103 20:17:25.197469  478496 command_runner.go:130] > Linkmode:         dynamic
	I0103 20:17:25.197500  478496 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0103 20:17:25.197520  478496 command_runner.go:130] > SeccompEnabled:   true
	I0103 20:17:25.197539  478496 command_runner.go:130] > AppArmorEnabled:  false
	I0103 20:17:25.202482  478496 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.6 ...
	I0103 20:17:25.204438  478496 out.go:177]   - env NO_PROXY=192.168.58.2
	I0103 20:17:25.206014  478496 cli_runner.go:164] Run: docker network inspect multinode-004925 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0103 20:17:25.224137  478496 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I0103 20:17:25.229087  478496 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
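The host.minikube.internal update above, rewritten as an idempotent bash snippet (the gateway IP 192.168.58.1 is the value from this run): if the entry is missing, the hosts file is rebuilt without any stale entry and the fresh one appended.

	if ! grep -q $'\thost.minikube.internal$' /etc/hosts; then
	  { grep -v $'\thost.minikube.internal$' /etc/hosts; echo $'192.168.58.1\thost.minikube.internal'; } > /tmp/h.$$
	  sudo cp /tmp/h.$$ /etc/hosts
	fi
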
	I0103 20:17:25.242230  478496 certs.go:56] Setting up /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/multinode-004925 for IP: 192.168.58.3
	I0103 20:17:25.242272  478496 certs.go:190] acquiring lock for shared ca certs: {Name:mk7a87d13d39d2defe5d349d371b78fa1f1e95bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 20:17:25.242407  478496 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17885-409390/.minikube/ca.key
	I0103 20:17:25.242453  478496 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17885-409390/.minikube/proxy-client-ca.key
	I0103 20:17:25.242467  478496 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-409390/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0103 20:17:25.242484  478496 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-409390/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0103 20:17:25.242498  478496 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-409390/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0103 20:17:25.242509  478496 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-409390/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0103 20:17:25.242586  478496 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-409390/.minikube/certs/home/jenkins/minikube-integration/17885-409390/.minikube/certs/414763.pem (1338 bytes)
	W0103 20:17:25.242839  478496 certs.go:433] ignoring /home/jenkins/minikube-integration/17885-409390/.minikube/certs/home/jenkins/minikube-integration/17885-409390/.minikube/certs/414763_empty.pem, impossibly tiny 0 bytes
	I0103 20:17:25.242864  478496 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-409390/.minikube/certs/home/jenkins/minikube-integration/17885-409390/.minikube/certs/ca-key.pem (1679 bytes)
	I0103 20:17:25.242906  478496 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-409390/.minikube/certs/home/jenkins/minikube-integration/17885-409390/.minikube/certs/ca.pem (1078 bytes)
	I0103 20:17:25.242937  478496 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-409390/.minikube/certs/home/jenkins/minikube-integration/17885-409390/.minikube/certs/cert.pem (1123 bytes)
	I0103 20:17:25.242966  478496 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-409390/.minikube/certs/home/jenkins/minikube-integration/17885-409390/.minikube/certs/key.pem (1679 bytes)
	I0103 20:17:25.243034  478496 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-409390/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17885-409390/.minikube/files/etc/ssl/certs/4147632.pem (1708 bytes)
	I0103 20:17:25.243069  478496 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-409390/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0103 20:17:25.243088  478496 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-409390/.minikube/certs/414763.pem -> /usr/share/ca-certificates/414763.pem
	I0103 20:17:25.243104  478496 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-409390/.minikube/files/etc/ssl/certs/4147632.pem -> /usr/share/ca-certificates/4147632.pem
	I0103 20:17:25.243765  478496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-409390/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0103 20:17:25.278695  478496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-409390/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0103 20:17:25.307875  478496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-409390/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0103 20:17:25.337261  478496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-409390/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0103 20:17:25.367601  478496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-409390/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0103 20:17:25.396548  478496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-409390/.minikube/certs/414763.pem --> /usr/share/ca-certificates/414763.pem (1338 bytes)
	I0103 20:17:25.425702  478496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-409390/.minikube/files/etc/ssl/certs/4147632.pem --> /usr/share/ca-certificates/4147632.pem (1708 bytes)
	I0103 20:17:25.454291  478496 ssh_runner.go:195] Run: openssl version
	I0103 20:17:25.461368  478496 command_runner.go:130] > OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)
	I0103 20:17:25.461755  478496 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0103 20:17:25.473609  478496 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0103 20:17:25.478202  478496 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jan  3 19:53 /usr/share/ca-certificates/minikubeCA.pem
	I0103 20:17:25.478503  478496 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan  3 19:53 /usr/share/ca-certificates/minikubeCA.pem
	I0103 20:17:25.478631  478496 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0103 20:17:25.486788  478496 command_runner.go:130] > b5213941
	I0103 20:17:25.487244  478496 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0103 20:17:25.499089  478496 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/414763.pem && ln -fs /usr/share/ca-certificates/414763.pem /etc/ssl/certs/414763.pem"
	I0103 20:17:25.510984  478496 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/414763.pem
	I0103 20:17:25.515742  478496 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jan  3 20:01 /usr/share/ca-certificates/414763.pem
	I0103 20:17:25.515818  478496 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan  3 20:01 /usr/share/ca-certificates/414763.pem
	I0103 20:17:25.515887  478496 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/414763.pem
	I0103 20:17:25.524268  478496 command_runner.go:130] > 51391683
	I0103 20:17:25.524711  478496 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/414763.pem /etc/ssl/certs/51391683.0"
	I0103 20:17:25.536488  478496 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4147632.pem && ln -fs /usr/share/ca-certificates/4147632.pem /etc/ssl/certs/4147632.pem"
	I0103 20:17:25.548569  478496 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4147632.pem
	I0103 20:17:25.553473  478496 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jan  3 20:01 /usr/share/ca-certificates/4147632.pem
	I0103 20:17:25.553557  478496 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan  3 20:01 /usr/share/ca-certificates/4147632.pem
	I0103 20:17:25.553634  478496 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4147632.pem
	I0103 20:17:25.561857  478496 command_runner.go:130] > 3ec20f2e
	I0103 20:17:25.562271  478496 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4147632.pem /etc/ssl/certs/3ec20f2e.0"
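The pattern repeated above for each CA certificate, as a small sketch (cert name and hash taken from the minikubeCA lines in the log): copy the PEM into /usr/share/ca-certificates, link it into /etc/ssl/certs, then add the <subject-hash>.0 symlink OpenSSL uses for lookup.

	pem=/usr/share/ca-certificates/minikubeCA.pem
	sudo ln -fs "$pem" /etc/ssl/certs/minikubeCA.pem
	hash=$(openssl x509 -hash -noout -in "$pem")          # e.g. b5213941 in this run
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"
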
	I0103 20:17:25.574050  478496 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0103 20:17:25.578427  478496 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0103 20:17:25.578574  478496 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0103 20:17:25.578711  478496 ssh_runner.go:195] Run: crio config
	I0103 20:17:25.626341  478496 command_runner.go:130] ! time="2024-01-03 20:17:25.625977740Z" level=info msg="Starting CRI-O, version: 1.24.6, git: 4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90(clean)"
	I0103 20:17:25.626603  478496 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0103 20:17:25.632313  478496 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0103 20:17:25.632389  478496 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0103 20:17:25.632413  478496 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0103 20:17:25.632425  478496 command_runner.go:130] > #
	I0103 20:17:25.632447  478496 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0103 20:17:25.632458  478496 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0103 20:17:25.632472  478496 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0103 20:17:25.632485  478496 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0103 20:17:25.632494  478496 command_runner.go:130] > # reload'.
	I0103 20:17:25.632503  478496 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0103 20:17:25.632511  478496 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0103 20:17:25.632542  478496 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0103 20:17:25.632555  478496 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0103 20:17:25.632561  478496 command_runner.go:130] > [crio]
	I0103 20:17:25.632570  478496 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0103 20:17:25.632582  478496 command_runner.go:130] > # containers images, in this directory.
	I0103 20:17:25.632591  478496 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I0103 20:17:25.632602  478496 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0103 20:17:25.632618  478496 command_runner.go:130] > # runroot = "/tmp/containers-user-1000/containers"
	I0103 20:17:25.632629  478496 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0103 20:17:25.632637  478496 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0103 20:17:25.632645  478496 command_runner.go:130] > # storage_driver = "vfs"
	I0103 20:17:25.632653  478496 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0103 20:17:25.632662  478496 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0103 20:17:25.632668  478496 command_runner.go:130] > # storage_option = [
	I0103 20:17:25.632672  478496 command_runner.go:130] > # ]
	I0103 20:17:25.632682  478496 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0103 20:17:25.632693  478496 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0103 20:17:25.632699  478496 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0103 20:17:25.632709  478496 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0103 20:17:25.632717  478496 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0103 20:17:25.632722  478496 command_runner.go:130] > # always happen on a node reboot
	I0103 20:17:25.632730  478496 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0103 20:17:25.632740  478496 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0103 20:17:25.632747  478496 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0103 20:17:25.632764  478496 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0103 20:17:25.632771  478496 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I0103 20:17:25.632790  478496 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0103 20:17:25.632810  478496 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0103 20:17:25.632815  478496 command_runner.go:130] > # internal_wipe = true
	I0103 20:17:25.632822  478496 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0103 20:17:25.632836  478496 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0103 20:17:25.632847  478496 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0103 20:17:25.632855  478496 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0103 20:17:25.632868  478496 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0103 20:17:25.632872  478496 command_runner.go:130] > [crio.api]
	I0103 20:17:25.632879  478496 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0103 20:17:25.632885  478496 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0103 20:17:25.632895  478496 command_runner.go:130] > # IP address on which the stream server will listen.
	I0103 20:17:25.632900  478496 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0103 20:17:25.632908  478496 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0103 20:17:25.632915  478496 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0103 20:17:25.632922  478496 command_runner.go:130] > # stream_port = "0"
	I0103 20:17:25.632929  478496 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0103 20:17:25.632941  478496 command_runner.go:130] > # stream_enable_tls = false
	I0103 20:17:25.632949  478496 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0103 20:17:25.632955  478496 command_runner.go:130] > # stream_idle_timeout = ""
	I0103 20:17:25.632965  478496 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0103 20:17:25.632973  478496 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0103 20:17:25.632981  478496 command_runner.go:130] > # minutes.
	I0103 20:17:25.632986  478496 command_runner.go:130] > # stream_tls_cert = ""
	I0103 20:17:25.632994  478496 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0103 20:17:25.633006  478496 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0103 20:17:25.633012  478496 command_runner.go:130] > # stream_tls_key = ""
	I0103 20:17:25.633021  478496 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0103 20:17:25.633032  478496 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0103 20:17:25.633038  478496 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0103 20:17:25.633044  478496 command_runner.go:130] > # stream_tls_ca = ""
	I0103 20:17:25.633055  478496 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I0103 20:17:25.633062  478496 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I0103 20:17:25.633074  478496 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I0103 20:17:25.633080  478496 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I0103 20:17:25.633096  478496 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0103 20:17:25.633107  478496 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0103 20:17:25.633112  478496 command_runner.go:130] > [crio.runtime]
	I0103 20:17:25.633120  478496 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0103 20:17:25.633129  478496 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0103 20:17:25.633134  478496 command_runner.go:130] > # "nofile=1024:2048"
	I0103 20:17:25.633142  478496 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0103 20:17:25.633151  478496 command_runner.go:130] > # default_ulimits = [
	I0103 20:17:25.633155  478496 command_runner.go:130] > # ]
	I0103 20:17:25.633163  478496 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0103 20:17:25.633171  478496 command_runner.go:130] > # no_pivot = false
	I0103 20:17:25.633178  478496 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0103 20:17:25.633188  478496 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0103 20:17:25.633198  478496 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0103 20:17:25.633206  478496 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0103 20:17:25.633214  478496 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0103 20:17:25.633222  478496 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0103 20:17:25.633230  478496 command_runner.go:130] > # conmon = ""
	I0103 20:17:25.633235  478496 command_runner.go:130] > # Cgroup setting for conmon
	I0103 20:17:25.633244  478496 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0103 20:17:25.633252  478496 command_runner.go:130] > conmon_cgroup = "pod"
	I0103 20:17:25.633259  478496 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0103 20:17:25.633266  478496 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0103 20:17:25.633277  478496 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0103 20:17:25.633289  478496 command_runner.go:130] > # conmon_env = [
	I0103 20:17:25.633293  478496 command_runner.go:130] > # ]
	I0103 20:17:25.633300  478496 command_runner.go:130] > # Additional environment variables to set for all the
	I0103 20:17:25.633310  478496 command_runner.go:130] > # containers. These are overridden if set in the
	I0103 20:17:25.633317  478496 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0103 20:17:25.633326  478496 command_runner.go:130] > # default_env = [
	I0103 20:17:25.633331  478496 command_runner.go:130] > # ]
	I0103 20:17:25.633338  478496 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0103 20:17:25.633347  478496 command_runner.go:130] > # selinux = false
	I0103 20:17:25.633355  478496 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0103 20:17:25.633362  478496 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0103 20:17:25.633372  478496 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0103 20:17:25.633377  478496 command_runner.go:130] > # seccomp_profile = ""
	I0103 20:17:25.633387  478496 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0103 20:17:25.633394  478496 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0103 20:17:25.633406  478496 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0103 20:17:25.633413  478496 command_runner.go:130] > # which might increase security.
	I0103 20:17:25.633423  478496 command_runner.go:130] > # seccomp_use_default_when_empty = true
	I0103 20:17:25.633430  478496 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0103 20:17:25.633441  478496 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0103 20:17:25.633448  478496 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0103 20:17:25.633456  478496 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0103 20:17:25.633464  478496 command_runner.go:130] > # This option supports live configuration reload.
	I0103 20:17:25.633470  478496 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0103 20:17:25.633480  478496 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0103 20:17:25.633486  478496 command_runner.go:130] > # the cgroup blockio controller.
	I0103 20:17:25.633493  478496 command_runner.go:130] > # blockio_config_file = ""
	I0103 20:17:25.633501  478496 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0103 20:17:25.633507  478496 command_runner.go:130] > # irqbalance daemon.
	I0103 20:17:25.633517  478496 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0103 20:17:25.633525  478496 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0103 20:17:25.633534  478496 command_runner.go:130] > # This option supports live configuration reload.
	I0103 20:17:25.633539  478496 command_runner.go:130] > # rdt_config_file = ""
	I0103 20:17:25.633546  478496 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0103 20:17:25.633554  478496 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0103 20:17:25.633562  478496 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0103 20:17:25.633571  478496 command_runner.go:130] > # separate_pull_cgroup = ""
	I0103 20:17:25.633579  478496 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0103 20:17:25.633591  478496 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0103 20:17:25.633596  478496 command_runner.go:130] > # will be added.
	I0103 20:17:25.633604  478496 command_runner.go:130] > # default_capabilities = [
	I0103 20:17:25.633608  478496 command_runner.go:130] > # 	"CHOWN",
	I0103 20:17:25.633616  478496 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0103 20:17:25.633620  478496 command_runner.go:130] > # 	"FSETID",
	I0103 20:17:25.633625  478496 command_runner.go:130] > # 	"FOWNER",
	I0103 20:17:25.633630  478496 command_runner.go:130] > # 	"SETGID",
	I0103 20:17:25.633634  478496 command_runner.go:130] > # 	"SETUID",
	I0103 20:17:25.633642  478496 command_runner.go:130] > # 	"SETPCAP",
	I0103 20:17:25.633650  478496 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0103 20:17:25.633655  478496 command_runner.go:130] > # 	"KILL",
	I0103 20:17:25.633662  478496 command_runner.go:130] > # ]
	I0103 20:17:25.633671  478496 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0103 20:17:25.633683  478496 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0103 20:17:25.633689  478496 command_runner.go:130] > # add_inheritable_capabilities = true
	I0103 20:17:25.633698  478496 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0103 20:17:25.633708  478496 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0103 20:17:25.633713  478496 command_runner.go:130] > # default_sysctls = [
	I0103 20:17:25.633717  478496 command_runner.go:130] > # ]
	I0103 20:17:25.633723  478496 command_runner.go:130] > # List of devices on the host that a
	I0103 20:17:25.633733  478496 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0103 20:17:25.633741  478496 command_runner.go:130] > # allowed_devices = [
	I0103 20:17:25.633746  478496 command_runner.go:130] > # 	"/dev/fuse",
	I0103 20:17:25.633751  478496 command_runner.go:130] > # ]
	I0103 20:17:25.633759  478496 command_runner.go:130] > # List of additional devices. specified as
	I0103 20:17:25.633780  478496 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0103 20:17:25.633791  478496 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0103 20:17:25.633798  478496 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0103 20:17:25.633804  478496 command_runner.go:130] > # additional_devices = [
	I0103 20:17:25.633813  478496 command_runner.go:130] > # ]
	I0103 20:17:25.633819  478496 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0103 20:17:25.633827  478496 command_runner.go:130] > # cdi_spec_dirs = [
	I0103 20:17:25.633834  478496 command_runner.go:130] > # 	"/etc/cdi",
	I0103 20:17:25.633838  478496 command_runner.go:130] > # 	"/var/run/cdi",
	I0103 20:17:25.633845  478496 command_runner.go:130] > # ]
	I0103 20:17:25.633853  478496 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0103 20:17:25.633861  478496 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0103 20:17:25.633869  478496 command_runner.go:130] > # Defaults to false.
	I0103 20:17:25.633875  478496 command_runner.go:130] > # device_ownership_from_security_context = false
	I0103 20:17:25.633885  478496 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0103 20:17:25.633893  478496 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0103 20:17:25.633900  478496 command_runner.go:130] > # hooks_dir = [
	I0103 20:17:25.633906  478496 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0103 20:17:25.633913  478496 command_runner.go:130] > # ]
	I0103 20:17:25.633920  478496 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0103 20:17:25.633928  478496 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0103 20:17:25.633939  478496 command_runner.go:130] > # its default mounts from the following two files:
	I0103 20:17:25.633943  478496 command_runner.go:130] > #
	I0103 20:17:25.633951  478496 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0103 20:17:25.633962  478496 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0103 20:17:25.633969  478496 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0103 20:17:25.633974  478496 command_runner.go:130] > #
	I0103 20:17:25.633983  478496 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0103 20:17:25.633994  478496 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0103 20:17:25.634005  478496 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0103 20:17:25.634011  478496 command_runner.go:130] > #      only add mounts it finds in this file.
	I0103 20:17:25.634018  478496 command_runner.go:130] > #
	I0103 20:17:25.634024  478496 command_runner.go:130] > # default_mounts_file = ""
	I0103 20:17:25.634030  478496 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0103 20:17:25.634040  478496 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0103 20:17:25.634048  478496 command_runner.go:130] > # pids_limit = 0
	I0103 20:17:25.634056  478496 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0103 20:17:25.634064  478496 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0103 20:17:25.634075  478496 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0103 20:17:25.634086  478496 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0103 20:17:25.634094  478496 command_runner.go:130] > # log_size_max = -1
	I0103 20:17:25.634102  478496 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0103 20:17:25.634108  478496 command_runner.go:130] > # log_to_journald = false
	I0103 20:17:25.634118  478496 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0103 20:17:25.634124  478496 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0103 20:17:25.634133  478496 command_runner.go:130] > # Path to directory for container attach sockets.
	I0103 20:17:25.634139  478496 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0103 20:17:25.634146  478496 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0103 20:17:25.634154  478496 command_runner.go:130] > # bind_mount_prefix = ""
	I0103 20:17:25.634161  478496 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0103 20:17:25.634168  478496 command_runner.go:130] > # read_only = false
	I0103 20:17:25.634176  478496 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0103 20:17:25.634186  478496 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0103 20:17:25.634191  478496 command_runner.go:130] > # live configuration reload.
	I0103 20:17:25.634196  478496 command_runner.go:130] > # log_level = "info"
	I0103 20:17:25.634204  478496 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0103 20:17:25.634213  478496 command_runner.go:130] > # This option supports live configuration reload.
	I0103 20:17:25.634219  478496 command_runner.go:130] > # log_filter = ""
	I0103 20:17:25.634228  478496 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0103 20:17:25.634238  478496 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0103 20:17:25.634243  478496 command_runner.go:130] > # separated by comma.
	I0103 20:17:25.634248  478496 command_runner.go:130] > # uid_mappings = ""
	I0103 20:17:25.634256  478496 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0103 20:17:25.634265  478496 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0103 20:17:25.634273  478496 command_runner.go:130] > # separated by comma.
	I0103 20:17:25.634278  478496 command_runner.go:130] > # gid_mappings = ""
	I0103 20:17:25.634294  478496 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0103 20:17:25.634301  478496 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0103 20:17:25.634312  478496 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0103 20:17:25.634317  478496 command_runner.go:130] > # minimum_mappable_uid = -1
	I0103 20:17:25.634327  478496 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0103 20:17:25.634335  478496 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0103 20:17:25.634345  478496 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0103 20:17:25.634350  478496 command_runner.go:130] > # minimum_mappable_gid = -1
	I0103 20:17:25.634362  478496 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0103 20:17:25.634373  478496 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0103 20:17:25.634380  478496 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0103 20:17:25.634386  478496 command_runner.go:130] > # ctr_stop_timeout = 30
	I0103 20:17:25.634395  478496 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0103 20:17:25.634404  478496 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0103 20:17:25.634414  478496 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0103 20:17:25.634420  478496 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0103 20:17:25.634425  478496 command_runner.go:130] > # drop_infra_ctr = true
	I0103 20:17:25.634433  478496 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0103 20:17:25.634443  478496 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0103 20:17:25.634452  478496 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0103 20:17:25.634460  478496 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0103 20:17:25.634467  478496 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0103 20:17:25.634474  478496 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0103 20:17:25.634482  478496 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0103 20:17:25.634490  478496 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0103 20:17:25.634499  478496 command_runner.go:130] > # pinns_path = ""
	I0103 20:17:25.634507  478496 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0103 20:17:25.634534  478496 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I0103 20:17:25.634545  478496 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I0103 20:17:25.634551  478496 command_runner.go:130] > # default_runtime = "runc"
	I0103 20:17:25.634561  478496 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0103 20:17:25.634571  478496 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0103 20:17:25.634585  478496 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0103 20:17:25.634592  478496 command_runner.go:130] > # creation as a file is not desired either.
	I0103 20:17:25.634605  478496 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0103 20:17:25.634611  478496 command_runner.go:130] > # the hostname is being managed dynamically.
	I0103 20:17:25.634617  478496 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0103 20:17:25.634624  478496 command_runner.go:130] > # ]
	I0103 20:17:25.634632  478496 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0103 20:17:25.634644  478496 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0103 20:17:25.634655  478496 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I0103 20:17:25.634662  478496 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I0103 20:17:25.634667  478496 command_runner.go:130] > #
	I0103 20:17:25.634677  478496 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I0103 20:17:25.634686  478496 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I0103 20:17:25.634693  478496 command_runner.go:130] > #  runtime_type = "oci"
	I0103 20:17:25.634699  478496 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I0103 20:17:25.634707  478496 command_runner.go:130] > #  privileged_without_host_devices = false
	I0103 20:17:25.634713  478496 command_runner.go:130] > #  allowed_annotations = []
	I0103 20:17:25.634721  478496 command_runner.go:130] > # Where:
	I0103 20:17:25.634728  478496 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I0103 20:17:25.634738  478496 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I0103 20:17:25.634746  478496 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0103 20:17:25.634753  478496 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0103 20:17:25.634761  478496 command_runner.go:130] > #   in $PATH.
	I0103 20:17:25.634768  478496 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I0103 20:17:25.634777  478496 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0103 20:17:25.634784  478496 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I0103 20:17:25.634791  478496 command_runner.go:130] > #   state.
	I0103 20:17:25.634799  478496 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0103 20:17:25.634808  478496 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0103 20:17:25.634825  478496 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0103 20:17:25.634832  478496 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0103 20:17:25.634840  478496 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0103 20:17:25.634851  478496 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0103 20:17:25.634857  478496 command_runner.go:130] > #   The currently recognized values are:
	I0103 20:17:25.634865  478496 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0103 20:17:25.634877  478496 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0103 20:17:25.634884  478496 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0103 20:17:25.634892  478496 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0103 20:17:25.634910  478496 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0103 20:17:25.634919  478496 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0103 20:17:25.634928  478496 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0103 20:17:25.634944  478496 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I0103 20:17:25.634951  478496 command_runner.go:130] > #   should be moved to the container's cgroup
	I0103 20:17:25.634959  478496 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0103 20:17:25.634965  478496 command_runner.go:130] > runtime_path = "/usr/lib/cri-o-runc/sbin/runc"
	I0103 20:17:25.634970  478496 command_runner.go:130] > runtime_type = "oci"
	I0103 20:17:25.634977  478496 command_runner.go:130] > runtime_root = "/run/runc"
	I0103 20:17:25.634984  478496 command_runner.go:130] > runtime_config_path = ""
	I0103 20:17:25.634990  478496 command_runner.go:130] > monitor_path = ""
	I0103 20:17:25.634998  478496 command_runner.go:130] > monitor_cgroup = ""
	I0103 20:17:25.635004  478496 command_runner.go:130] > monitor_exec_cgroup = ""
	I0103 20:17:25.635025  478496 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I0103 20:17:25.635033  478496 command_runner.go:130] > # running containers
	I0103 20:17:25.635040  478496 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I0103 20:17:25.635050  478496 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I0103 20:17:25.635059  478496 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I0103 20:17:25.635068  478496 command_runner.go:130] > # surface and mitigating the consequences of a container breakout.
	I0103 20:17:25.635075  478496 command_runner.go:130] > # Kata Containers with the default configured VMM
	I0103 20:17:25.635085  478496 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I0103 20:17:25.635093  478496 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I0103 20:17:25.635100  478496 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I0103 20:17:25.635107  478496 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I0103 20:17:25.635115  478496 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
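	For reference, any of the commented-out handlers above can be enabled without editing the main file by placing a fragment in CRI-O's conf.d drop-in directory. A minimal sketch for crun (the drop-in name, binary path, and values are illustrative and assume crun is already installed):
	cat <<-'EOF' | sudo tee /etc/crio/crio.conf.d/10-crun.conf
	# Register crun as an additional OCI runtime handler
	[crio.runtime.runtimes.crun]
	runtime_path = "/usr/bin/crun"
	runtime_type = "oci"
	runtime_root = "/run/crun"
	EOF
	sudo systemctl restart crio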
	I0103 20:17:25.635123  478496 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0103 20:17:25.635130  478496 command_runner.go:130] > # that work based on annotations, rather than through the CRI.
	I0103 20:17:25.635141  478496 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0103 20:17:25.635151  478496 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I0103 20:17:25.635161  478496 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0103 20:17:25.635170  478496 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0103 20:17:25.635182  478496 command_runner.go:130] > # For a container to opt into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0103 20:17:25.635200  478496 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0103 20:17:25.635210  478496 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0103 20:17:25.635220  478496 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0103 20:17:25.635227  478496 command_runner.go:130] > # Example:
	I0103 20:17:25.635234  478496 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0103 20:17:25.635239  478496 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0103 20:17:25.635246  478496 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0103 20:17:25.635259  478496 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0103 20:17:25.635264  478496 command_runner.go:130] > # cpuset = "0-1"
	I0103 20:17:25.635272  478496 command_runner.go:130] > # cpushares = 0
	I0103 20:17:25.635276  478496 command_runner.go:130] > # Where:
	I0103 20:17:25.635285  478496 command_runner.go:130] > # The workload name is workload-type.
	I0103 20:17:25.635294  478496 command_runner.go:130] > # To opt in, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0103 20:17:25.635302  478496 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0103 20:17:25.635312  478496 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0103 20:17:25.635323  478496 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0103 20:17:25.635333  478496 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0103 20:17:25.635339  478496 command_runner.go:130] > # 
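	To make the workload example concrete: a pod opts in with the activation annotation, and per-container overrides use the annotation prefix, exactly as described above. A sketch (the pod and container names are hypothetical, and the commented-out workload table above would need to be enabled first):
	kubectl apply -f - <<-'EOF'
	apiVersion: v1
	kind: Pod
	metadata:
	  name: workload-demo
	  annotations:
	    io.crio/workload: ""
	    io.crio.workload-type/app: '{"cpushares": "512"}'
	spec:
	  containers:
	  - name: app
	    image: registry.k8s.io/pause:3.9
	EOF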
	I0103 20:17:25.635347  478496 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0103 20:17:25.635354  478496 command_runner.go:130] > #
	I0103 20:17:25.635361  478496 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0103 20:17:25.635369  478496 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0103 20:17:25.635381  478496 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0103 20:17:25.635389  478496 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0103 20:17:25.635401  478496 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0103 20:17:25.635406  478496 command_runner.go:130] > [crio.image]
	I0103 20:17:25.635415  478496 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0103 20:17:25.635423  478496 command_runner.go:130] > # default_transport = "docker://"
	I0103 20:17:25.635431  478496 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0103 20:17:25.635441  478496 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0103 20:17:25.635447  478496 command_runner.go:130] > # global_auth_file = ""
	I0103 20:17:25.635455  478496 command_runner.go:130] > # The image used to instantiate infra containers.
	I0103 20:17:25.635462  478496 command_runner.go:130] > # This option supports live configuration reload.
	I0103 20:17:25.635470  478496 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I0103 20:17:25.635479  478496 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0103 20:17:25.635489  478496 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0103 20:17:25.635495  478496 command_runner.go:130] > # This option supports live configuration reload.
	I0103 20:17:25.635501  478496 command_runner.go:130] > # pause_image_auth_file = ""
	I0103 20:17:25.635510  478496 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0103 20:17:25.635527  478496 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I0103 20:17:25.635535  478496 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I0103 20:17:25.635542  478496 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0103 20:17:25.635550  478496 command_runner.go:130] > # pause_command = "/pause"
	I0103 20:17:25.635558  478496 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0103 20:17:25.635566  478496 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0103 20:17:25.635577  478496 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0103 20:17:25.635585  478496 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0103 20:17:25.635592  478496 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0103 20:17:25.635599  478496 command_runner.go:130] > # signature_policy = ""
	I0103 20:17:25.635609  478496 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0103 20:17:25.635619  478496 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0103 20:17:25.635626  478496 command_runner.go:130] > # changing them here.
	I0103 20:17:25.635632  478496 command_runner.go:130] > # insecure_registries = [
	I0103 20:17:25.635638  478496 command_runner.go:130] > # ]
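	As the comments above recommend, per-registry settings usually belong in containers-registries.conf(5) rather than here. A sketch that marks a hypothetical local registry as insecure, using the v2 drop-in syntax:
	cat <<-'EOF' | sudo tee /etc/containers/registries.conf.d/50-local.conf
	# Allow plain-HTTP/self-signed access to a private registry
	[[registry]]
	location = "registry.local:5000"
	insecure = true
	EOF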
	I0103 20:17:25.635646  478496 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0103 20:17:25.635654  478496 command_runner.go:130] > # ignore; the last will ignore volumes entirely.
	I0103 20:17:25.635662  478496 command_runner.go:130] > # image_volumes = "mkdir"
	I0103 20:17:25.635669  478496 command_runner.go:130] > # Temporary directory to use for storing big files
	I0103 20:17:25.635674  478496 command_runner.go:130] > # big_files_temporary_dir = ""
	I0103 20:17:25.635682  478496 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0103 20:17:25.635689  478496 command_runner.go:130] > # CNI plugins.
	I0103 20:17:25.635694  478496 command_runner.go:130] > [crio.network]
	I0103 20:17:25.635702  478496 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0103 20:17:25.635711  478496 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I0103 20:17:25.635716  478496 command_runner.go:130] > # cni_default_network = ""
	I0103 20:17:25.635723  478496 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0103 20:17:25.635729  478496 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0103 20:17:25.635739  478496 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0103 20:17:25.635744  478496 command_runner.go:130] > # plugin_dirs = [
	I0103 20:17:25.635749  478496 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0103 20:17:25.635756  478496 command_runner.go:130] > # ]
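	For context, when cni_default_network is unset CRI-O uses the first network it finds in network_dir; in this run minikube applies its own CNI manifest later in the log. A minimal bridge-network sketch, purely illustrative (the name and subnet are arbitrary):
	cat <<-'EOF' | sudo tee /etc/cni/net.d/10-demo.conflist
	{
	  "cniVersion": "0.4.0",
	  "name": "demo",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "cni0",
	      "isGateway": true,
	      "ipMasq": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
	EOF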
	I0103 20:17:25.635763  478496 command_runner.go:130] > # A necessary configuration for Prometheus-based metrics retrieval
	I0103 20:17:25.635768  478496 command_runner.go:130] > [crio.metrics]
	I0103 20:17:25.635775  478496 command_runner.go:130] > # Globally enable or disable metrics support.
	I0103 20:17:25.635782  478496 command_runner.go:130] > # enable_metrics = false
	I0103 20:17:25.635788  478496 command_runner.go:130] > # Specify enabled metrics collectors.
	I0103 20:17:25.635796  478496 command_runner.go:130] > # By default, all metrics are enabled.
	I0103 20:17:25.635805  478496 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0103 20:17:25.635815  478496 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0103 20:17:25.635823  478496 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0103 20:17:25.635831  478496 command_runner.go:130] > # metrics_collectors = [
	I0103 20:17:25.635836  478496 command_runner.go:130] > # 	"operations",
	I0103 20:17:25.635843  478496 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0103 20:17:25.635852  478496 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0103 20:17:25.635857  478496 command_runner.go:130] > # 	"operations_errors",
	I0103 20:17:25.635862  478496 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0103 20:17:25.635868  478496 command_runner.go:130] > # 	"image_pulls_by_name",
	I0103 20:17:25.635878  478496 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0103 20:17:25.635884  478496 command_runner.go:130] > # 	"image_pulls_failures",
	I0103 20:17:25.635896  478496 command_runner.go:130] > # 	"image_pulls_successes",
	I0103 20:17:25.635901  478496 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0103 20:17:25.635907  478496 command_runner.go:130] > # 	"image_layer_reuse",
	I0103 20:17:25.635915  478496 command_runner.go:130] > # 	"containers_oom_total",
	I0103 20:17:25.635921  478496 command_runner.go:130] > # 	"containers_oom",
	I0103 20:17:25.635928  478496 command_runner.go:130] > # 	"processes_defunct",
	I0103 20:17:25.635934  478496 command_runner.go:130] > # 	"operations_total",
	I0103 20:17:25.635942  478496 command_runner.go:130] > # 	"operations_latency_seconds",
	I0103 20:17:25.635949  478496 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0103 20:17:25.635954  478496 command_runner.go:130] > # 	"operations_errors_total",
	I0103 20:17:25.635960  478496 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0103 20:17:25.635970  478496 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0103 20:17:25.635975  478496 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0103 20:17:25.635983  478496 command_runner.go:130] > # 	"image_pulls_success_total",
	I0103 20:17:25.635989  478496 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0103 20:17:25.635994  478496 command_runner.go:130] > # 	"containers_oom_count_total",
	I0103 20:17:25.635999  478496 command_runner.go:130] > # ]
	I0103 20:17:25.636008  478496 command_runner.go:130] > # The port on which the metrics server will listen.
	I0103 20:17:25.636013  478496 command_runner.go:130] > # metrics_port = 9090
	I0103 20:17:25.636023  478496 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0103 20:17:25.636028  478496 command_runner.go:130] > # metrics_socket = ""
	I0103 20:17:25.636035  478496 command_runner.go:130] > # The certificate for the secure metrics server.
	I0103 20:17:25.636043  478496 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0103 20:17:25.636052  478496 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0103 20:17:25.636059  478496 command_runner.go:130] > # certificate on any modification event.
	I0103 20:17:25.636066  478496 command_runner.go:130] > # metrics_cert = ""
	I0103 20:17:25.636072  478496 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0103 20:17:25.636081  478496 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0103 20:17:25.636087  478496 command_runner.go:130] > # metrics_key = ""
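	If enable_metrics were set to true, the collectors listed above would be served in Prometheus format on metrics_port. A quick smoke test from the node, assuming the default port of 9090 shown above:
	curl -s http://127.0.0.1:9090/metrics | grep crio_operations | head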
	I0103 20:17:25.636094  478496 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0103 20:17:25.636101  478496 command_runner.go:130] > [crio.tracing]
	I0103 20:17:25.636108  478496 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0103 20:17:25.636115  478496 command_runner.go:130] > # enable_tracing = false
	I0103 20:17:25.636122  478496 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I0103 20:17:25.636131  478496 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0103 20:17:25.636140  478496 command_runner.go:130] > # Number of samples to collect per million spans.
	I0103 20:17:25.636149  478496 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0103 20:17:25.636159  478496 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0103 20:17:25.636166  478496 command_runner.go:130] > [crio.stats]
	I0103 20:17:25.636173  478496 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0103 20:17:25.636180  478496 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0103 20:17:25.636191  478496 command_runner.go:130] > # stats_collection_period = 0
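	The configuration dumped above can be reproduced on the node itself; on recent CRI-O releases, `crio config` prints the configuration the daemon resolves from its defaults and config file:
	sudo crio config | less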
	I0103 20:17:25.636270  478496 cni.go:84] Creating CNI manager for ""
	I0103 20:17:25.636281  478496 cni.go:136] 2 nodes found, recommending kindnet
	I0103 20:17:25.636291  478496 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0103 20:17:25.636310  478496 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.3 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-004925 NodeName:multinode-004925-m02 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.3 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0103 20:17:25.636431  478496 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.3
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-004925-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.3
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
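	The generated manifest above can be sanity-checked offline before the join. A sketch, assuming the YAML is saved to /tmp/kubeadm.yaml (`kubeadm config validate` is available on recent kubeadm releases, including v1.28):
	sudo kubeadm config validate --config /tmp/kubeadm.yaml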
	
	I0103 20:17:25.636493  478496 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=multinode-004925-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-004925 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0103 20:17:25.636566  478496 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0103 20:17:25.646443  478496 command_runner.go:130] > kubeadm
	I0103 20:17:25.646461  478496 command_runner.go:130] > kubectl
	I0103 20:17:25.646473  478496 command_runner.go:130] > kubelet
	I0103 20:17:25.647639  478496 binaries.go:44] Found k8s binaries, skipping transfer
	I0103 20:17:25.647705  478496 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0103 20:17:25.658443  478496 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0103 20:17:25.682001  478496 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0103 20:17:25.705476  478496 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0103 20:17:25.710393  478496 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0103 20:17:25.724851  478496 host.go:66] Checking if "multinode-004925" exists ...
	I0103 20:17:25.725185  478496 config.go:182] Loaded profile config "multinode-004925": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0103 20:17:25.725476  478496 start.go:304] JoinCluster: &{Name:multinode-004925 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-004925 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0103 20:17:25.725563  478496 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0103 20:17:25.725622  478496 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-004925
	I0103 20:17:25.744183  478496 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/17885-409390/.minikube/machines/multinode-004925/id_rsa Username:docker}
	I0103 20:17:25.917290  478496 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token t45ge7.ybyqyyc85p1qe632 --discovery-token-ca-cert-hash sha256:497121a93982227783f08bfd7c063fc9f8a8d85d0f5f85a6107fb52aadafec60 
	I0103 20:17:25.921189  478496 start.go:325] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0103 20:17:25.921270  478496 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token t45ge7.ybyqyyc85p1qe632 --discovery-token-ca-cert-hash sha256:497121a93982227783f08bfd7c063fc9f8a8d85d0f5f85a6107fb52aadafec60 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-004925-m02"
	I0103 20:17:25.964681  478496 command_runner.go:130] > [preflight] Running pre-flight checks
	I0103 20:17:26.017806  478496 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
	I0103 20:17:26.017828  478496 command_runner.go:130] > KERNEL_VERSION: 5.15.0-1051-aws
	I0103 20:17:26.017846  478496 command_runner.go:130] > OS: Linux
	I0103 20:17:26.017854  478496 command_runner.go:130] > CGROUPS_CPU: enabled
	I0103 20:17:26.017861  478496 command_runner.go:130] > CGROUPS_CPUACCT: enabled
	I0103 20:17:26.017868  478496 command_runner.go:130] > CGROUPS_CPUSET: enabled
	I0103 20:17:26.017874  478496 command_runner.go:130] > CGROUPS_DEVICES: enabled
	I0103 20:17:26.017887  478496 command_runner.go:130] > CGROUPS_FREEZER: enabled
	I0103 20:17:26.017893  478496 command_runner.go:130] > CGROUPS_MEMORY: enabled
	I0103 20:17:26.017901  478496 command_runner.go:130] > CGROUPS_PIDS: enabled
	I0103 20:17:26.017907  478496 command_runner.go:130] > CGROUPS_HUGETLB: enabled
	I0103 20:17:26.017913  478496 command_runner.go:130] > CGROUPS_BLKIO: enabled
	I0103 20:17:26.136556  478496 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0103 20:17:26.136587  478496 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0103 20:17:26.175047  478496 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0103 20:17:26.175339  478496 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0103 20:17:26.175352  478496 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0103 20:17:26.283944  478496 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I0103 20:17:28.842345  478496 command_runner.go:130] > This node has joined the cluster:
	I0103 20:17:28.842375  478496 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0103 20:17:28.842384  478496 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0103 20:17:28.842393  478496 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0103 20:17:28.845505  478496 command_runner.go:130] ! W0103 20:17:25.964200    1025 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/crio/crio.sock". Please update your configuration!
	I0103 20:17:28.845541  478496 command_runner.go:130] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1051-aws\n", err: exit status 1
	I0103 20:17:28.845556  478496 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0103 20:17:28.845569  478496 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token t45ge7.ybyqyyc85p1qe632 --discovery-token-ca-cert-hash sha256:497121a93982227783f08bfd7c063fc9f8a8d85d0f5f85a6107fb52aadafec60 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-004925-m02": (2.924271304s)
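	The --discovery-token-ca-cert-hash in the join command above is a SHA-256 over the cluster CA's public key, and can be re-derived on the control plane when composing a join command by hand (the standard OpenSSL pipeline from the kubeadm docs; the CA path follows the CertDir shown earlier in this log):
	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'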
	I0103 20:17:28.845589  478496 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0103 20:17:29.070267  478496 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /lib/systemd/system/kubelet.service.
	I0103 20:17:29.070365  478496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=1b6a81cbc05f28310ff11df4170e79e2b8bf477a minikube.k8s.io/name=multinode-004925 minikube.k8s.io/updated_at=2024_01_03T20_17_29_0700 minikube.k8s.io/primary=false "-l minikube.k8s.io/primary!=true" --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0103 20:17:29.179264  478496 command_runner.go:130] > node/multinode-004925-m02 labeled
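	The label applied above can be confirmed with kubectl's label columns (names as used in this run):
	kubectl get nodes -L minikube.k8s.io/primary -L minikube.k8s.io/name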
	I0103 20:17:29.182849  478496 start.go:306] JoinCluster complete in 3.457367184s
	I0103 20:17:29.182879  478496 cni.go:84] Creating CNI manager for ""
	I0103 20:17:29.182886  478496 cni.go:136] 2 nodes found, recommending kindnet
	I0103 20:17:29.182940  478496 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0103 20:17:29.188405  478496 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0103 20:17:29.188436  478496 command_runner.go:130] >   Size: 4030506   	Blocks: 7880       IO Block: 4096   regular file
	I0103 20:17:29.188445  478496 command_runner.go:130] > Device: 36h/54d	Inode: 2362850     Links: 1
	I0103 20:17:29.188453  478496 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0103 20:17:29.188470  478496 command_runner.go:130] > Access: 2023-12-04 16:39:54.000000000 +0000
	I0103 20:17:29.188476  478496 command_runner.go:130] > Modify: 2023-12-04 16:39:54.000000000 +0000
	I0103 20:17:29.188482  478496 command_runner.go:130] > Change: 2024-01-03 19:53:10.292911836 +0000
	I0103 20:17:29.188488  478496 command_runner.go:130] >  Birth: 2024-01-03 19:53:10.240912112 +0000
	I0103 20:17:29.188554  478496 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0103 20:17:29.188564  478496 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0103 20:17:29.209883  478496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0103 20:17:29.559589  478496 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0103 20:17:29.559617  478496 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0103 20:17:29.559625  478496 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0103 20:17:29.559631  478496 command_runner.go:130] > daemonset.apps/kindnet configured
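	Once the kindnet DaemonSet is configured, it should roll a pod out to the newly joined node; a quick check (the app=kindnet label is assumed from the upstream kindnet manifest):
	kubectl -n kube-system rollout status daemonset/kindnet --timeout=120s
	kubectl -n kube-system get pods -l app=kindnet -o wide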
	I0103 20:17:29.560003  478496 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17885-409390/kubeconfig
	I0103 20:17:29.560261  478496 kapi.go:59] client config for multinode-004925: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17885-409390/.minikube/profiles/multinode-004925/client.crt", KeyFile:"/home/jenkins/minikube-integration/17885-409390/.minikube/profiles/multinode-004925/client.key", CAFile:"/home/jenkins/minikube-integration/17885-409390/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16bfa00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0103 20:17:29.560589  478496 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0103 20:17:29.560597  478496 round_trippers.go:469] Request Headers:
	I0103 20:17:29.560607  478496 round_trippers.go:473]     Accept: application/json, */*
	I0103 20:17:29.560614  478496 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0103 20:17:29.563350  478496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 20:17:29.563374  478496 round_trippers.go:577] Response Headers:
	I0103 20:17:29.563382  478496 round_trippers.go:580]     Content-Length: 291
	I0103 20:17:29.563389  478496 round_trippers.go:580]     Date: Wed, 03 Jan 2024 20:17:29 GMT
	I0103 20:17:29.563395  478496 round_trippers.go:580]     Audit-Id: 830190c9-8437-4f01-acfa-3c7b34ef59dc
	I0103 20:17:29.563402  478496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 20:17:29.563409  478496 round_trippers.go:580]     Content-Type: application/json
	I0103 20:17:29.563420  478496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2dd5fe00-4fef-4b9d-9976-ced013b24f01
	I0103 20:17:29.563431  478496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1353a2-568a-4e94-914a-294122a8013a
	I0103 20:17:29.563455  478496 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"8c8554bf-4657-4c73-b569-16c8b8e0483f","resourceVersion":"445","creationTimestamp":"2024-01-03T20:16:28Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0103 20:17:29.563550  478496 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-004925" context rescaled to 1 replicas
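	The rescale above goes through the deployment's scale subresource; the same endpoint shown in the log can be queried with kubectl's raw mode, or the rescale reproduced directly:
	kubectl get --raw /apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	kubectl -n kube-system scale deployment coredns --replicas=1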
	I0103 20:17:29.563579  478496 start.go:223] Will wait 6m0s for node &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0103 20:17:29.567175  478496 out.go:177] * Verifying Kubernetes components...
	I0103 20:17:29.568807  478496 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0103 20:17:29.594770  478496 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17885-409390/kubeconfig
	I0103 20:17:29.595092  478496 kapi.go:59] client config for multinode-004925: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17885-409390/.minikube/profiles/multinode-004925/client.crt", KeyFile:"/home/jenkins/minikube-integration/17885-409390/.minikube/profiles/multinode-004925/client.key", CAFile:"/home/jenkins/minikube-integration/17885-409390/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16bfa00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0103 20:17:29.595415  478496 node_ready.go:35] waiting up to 6m0s for node "multinode-004925-m02" to be "Ready" ...
	I0103 20:17:29.595508  478496 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-004925-m02
	I0103 20:17:29.595522  478496 round_trippers.go:469] Request Headers:
	I0103 20:17:29.595532  478496 round_trippers.go:473]     Accept: application/json, */*
	I0103 20:17:29.595552  478496 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0103 20:17:29.600347  478496 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0103 20:17:29.600369  478496 round_trippers.go:577] Response Headers:
	I0103 20:17:29.600378  478496 round_trippers.go:580]     Date: Wed, 03 Jan 2024 20:17:29 GMT
	I0103 20:17:29.600384  478496 round_trippers.go:580]     Audit-Id: 05f67fc5-980a-4d42-8683-ad9178283467
	I0103 20:17:29.600391  478496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 20:17:29.600397  478496 round_trippers.go:580]     Content-Type: application/json
	I0103 20:17:29.600405  478496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2dd5fe00-4fef-4b9d-9976-ced013b24f01
	I0103 20:17:29.600411  478496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1353a2-568a-4e94-914a-294122a8013a
	I0103 20:17:29.600618  478496 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-004925-m02","uid":"f889ebec-aa24-4ea4-865f-3bfad5255332","resourceVersion":"483","creationTimestamp":"2024-01-03T20:17:28Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-004925-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-004925","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_03T20_17_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-03T20:17:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 5735 chars]
	I0103 20:17:30.095790  478496 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-004925-m02
	I0103 20:17:30.095835  478496 round_trippers.go:469] Request Headers:
	I0103 20:17:30.095848  478496 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0103 20:17:30.095857  478496 round_trippers.go:473]     Accept: application/json, */*
	I0103 20:17:30.098999  478496 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0103 20:17:30.099025  478496 round_trippers.go:577] Response Headers:
	I0103 20:17:30.099034  478496 round_trippers.go:580]     Date: Wed, 03 Jan 2024 20:17:30 GMT
	I0103 20:17:30.099041  478496 round_trippers.go:580]     Audit-Id: 125ae2b3-bda4-425e-ae2c-851e27456283
	I0103 20:17:30.099047  478496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 20:17:30.099054  478496 round_trippers.go:580]     Content-Type: application/json
	I0103 20:17:30.099061  478496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2dd5fe00-4fef-4b9d-9976-ced013b24f01
	I0103 20:17:30.099067  478496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1353a2-568a-4e94-914a-294122a8013a
	I0103 20:17:30.099674  478496 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-004925-m02","uid":"f889ebec-aa24-4ea4-865f-3bfad5255332","resourceVersion":"483","creationTimestamp":"2024-01-03T20:17:28Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-004925-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-004925","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_03T20_17_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-03T20:17:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 5735 chars]
	I0103 20:17:30.596371  478496 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-004925-m02
	I0103 20:17:30.596396  478496 round_trippers.go:469] Request Headers:
	I0103 20:17:30.596406  478496 round_trippers.go:473]     Accept: application/json, */*
	I0103 20:17:30.596414  478496 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0103 20:17:30.599080  478496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 20:17:30.599100  478496 round_trippers.go:577] Response Headers:
	I0103 20:17:30.599109  478496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 20:17:30.599116  478496 round_trippers.go:580]     Content-Type: application/json
	I0103 20:17:30.599122  478496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2dd5fe00-4fef-4b9d-9976-ced013b24f01
	I0103 20:17:30.599128  478496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1353a2-568a-4e94-914a-294122a8013a
	I0103 20:17:30.599134  478496 round_trippers.go:580]     Date: Wed, 03 Jan 2024 20:17:30 GMT
	I0103 20:17:30.599140  478496 round_trippers.go:580]     Audit-Id: b9148fb4-d665-4f63-a4e3-39e683b28658
	I0103 20:17:30.599275  478496 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-004925-m02","uid":"f889ebec-aa24-4ea4-865f-3bfad5255332","resourceVersion":"483","creationTimestamp":"2024-01-03T20:17:28Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-004925-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-004925","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_03T20_17_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-03T20:17:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 5735 chars]
	I0103 20:17:31.096257  478496 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-004925-m02
	I0103 20:17:31.096284  478496 round_trippers.go:469] Request Headers:
	I0103 20:17:31.096294  478496 round_trippers.go:473]     Accept: application/json, */*
	I0103 20:17:31.096302  478496 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0103 20:17:31.098951  478496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 20:17:31.098983  478496 round_trippers.go:577] Response Headers:
	I0103 20:17:31.098993  478496 round_trippers.go:580]     Audit-Id: a499b10d-ec0b-4538-8351-07d46f17e06e
	I0103 20:17:31.099001  478496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 20:17:31.099007  478496 round_trippers.go:580]     Content-Type: application/json
	I0103 20:17:31.099013  478496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2dd5fe00-4fef-4b9d-9976-ced013b24f01
	I0103 20:17:31.099020  478496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1353a2-568a-4e94-914a-294122a8013a
	I0103 20:17:31.099028  478496 round_trippers.go:580]     Date: Wed, 03 Jan 2024 20:17:31 GMT
	I0103 20:17:31.099169  478496 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-004925-m02","uid":"f889ebec-aa24-4ea4-865f-3bfad5255332","resourceVersion":"498","creationTimestamp":"2024-01-03T20:17:28Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-004925-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-004925","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_03T20_17_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-03T20:17:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5844 chars]
	I0103 20:17:31.596579  478496 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-004925-m02
	I0103 20:17:31.596608  478496 round_trippers.go:469] Request Headers:
	I0103 20:17:31.596619  478496 round_trippers.go:473]     Accept: application/json, */*
	I0103 20:17:31.596626  478496 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0103 20:17:31.599304  478496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 20:17:31.599331  478496 round_trippers.go:577] Response Headers:
	I0103 20:17:31.599340  478496 round_trippers.go:580]     Audit-Id: 033d04fc-2387-4596-8511-c2c71afc35b5
	I0103 20:17:31.599347  478496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 20:17:31.599354  478496 round_trippers.go:580]     Content-Type: application/json
	I0103 20:17:31.599360  478496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2dd5fe00-4fef-4b9d-9976-ced013b24f01
	I0103 20:17:31.599367  478496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1353a2-568a-4e94-914a-294122a8013a
	I0103 20:17:31.599381  478496 round_trippers.go:580]     Date: Wed, 03 Jan 2024 20:17:31 GMT
	I0103 20:17:31.599507  478496 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-004925-m02","uid":"f889ebec-aa24-4ea4-865f-3bfad5255332","resourceVersion":"498","creationTimestamp":"2024-01-03T20:17:28Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-004925-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-004925","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_03T20_17_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-03T20:17:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5844 chars]
	I0103 20:17:31.599886  478496 node_ready.go:58] node "multinode-004925-m02" has status "Ready":"False"
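	The loop here simply re-reads the Node object until its Ready condition flips to True. Outside the test harness, the equivalent wait is a one-liner:
	kubectl wait --for=condition=Ready node/multinode-004925-m02 --timeout=6m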
	I0103 20:17:32.096648  478496 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-004925-m02
	I0103 20:17:32.096672  478496 round_trippers.go:469] Request Headers:
	I0103 20:17:32.096682  478496 round_trippers.go:473]     Accept: application/json, */*
	I0103 20:17:32.096693  478496 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0103 20:17:32.099166  478496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 20:17:32.099187  478496 round_trippers.go:577] Response Headers:
	I0103 20:17:32.099203  478496 round_trippers.go:580]     Audit-Id: a06b87bc-fbb1-428f-9062-b50187e1f4e9
	I0103 20:17:32.099209  478496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 20:17:32.099216  478496 round_trippers.go:580]     Content-Type: application/json
	I0103 20:17:32.099222  478496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2dd5fe00-4fef-4b9d-9976-ced013b24f01
	I0103 20:17:32.099228  478496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1353a2-568a-4e94-914a-294122a8013a
	I0103 20:17:32.099235  478496 round_trippers.go:580]     Date: Wed, 03 Jan 2024 20:17:32 GMT
	I0103 20:17:32.099347  478496 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-004925-m02","uid":"f889ebec-aa24-4ea4-865f-3bfad5255332","resourceVersion":"498","creationTimestamp":"2024-01-03T20:17:28Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-004925-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-004925","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_03T20_17_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-03T20:17:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5844 chars]
	I0103 20:17:32.596115  478496 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-004925-m02
	I0103 20:17:32.596137  478496 round_trippers.go:469] Request Headers:
	I0103 20:17:32.596148  478496 round_trippers.go:473]     Accept: application/json, */*
	I0103 20:17:32.596155  478496 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0103 20:17:32.598686  478496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 20:17:32.598710  478496 round_trippers.go:577] Response Headers:
	I0103 20:17:32.598718  478496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1353a2-568a-4e94-914a-294122a8013a
	I0103 20:17:32.598725  478496 round_trippers.go:580]     Date: Wed, 03 Jan 2024 20:17:32 GMT
	I0103 20:17:32.598731  478496 round_trippers.go:580]     Audit-Id: 1ec2297a-844b-444e-8ad6-5d6caec13edc
	I0103 20:17:32.598737  478496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 20:17:32.598745  478496 round_trippers.go:580]     Content-Type: application/json
	I0103 20:17:32.598755  478496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2dd5fe00-4fef-4b9d-9976-ced013b24f01
	I0103 20:17:32.598914  478496 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-004925-m02","uid":"f889ebec-aa24-4ea4-865f-3bfad5255332","resourceVersion":"498","creationTimestamp":"2024-01-03T20:17:28Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-004925-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-004925","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_03T20_17_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-03T20:17:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5844 chars]
	I0103 20:17:33.095800  478496 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-004925-m02
	I0103 20:17:33.095828  478496 round_trippers.go:469] Request Headers:
	I0103 20:17:33.095838  478496 round_trippers.go:473]     Accept: application/json, */*
	I0103 20:17:33.095845  478496 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0103 20:17:33.098576  478496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 20:17:33.098602  478496 round_trippers.go:577] Response Headers:
	I0103 20:17:33.098611  478496 round_trippers.go:580]     Audit-Id: 6f0d62ac-0a68-4805-a839-8e94a7debb74
	I0103 20:17:33.098618  478496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 20:17:33.098624  478496 round_trippers.go:580]     Content-Type: application/json
	I0103 20:17:33.098631  478496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2dd5fe00-4fef-4b9d-9976-ced013b24f01
	I0103 20:17:33.098637  478496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1353a2-568a-4e94-914a-294122a8013a
	I0103 20:17:33.098645  478496 round_trippers.go:580]     Date: Wed, 03 Jan 2024 20:17:33 GMT
	I0103 20:17:33.098812  478496 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-004925-m02","uid":"f889ebec-aa24-4ea4-865f-3bfad5255332","resourceVersion":"498","creationTimestamp":"2024-01-03T20:17:28Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-004925-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-004925","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_03T20_17_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-03T20:17:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5844 chars]
	I0103 20:17:33.596297  478496 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-004925-m02
	I0103 20:17:33.596324  478496 round_trippers.go:469] Request Headers:
	I0103 20:17:33.596337  478496 round_trippers.go:473]     Accept: application/json, */*
	I0103 20:17:33.596353  478496 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0103 20:17:33.599120  478496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 20:17:33.599156  478496 round_trippers.go:577] Response Headers:
	I0103 20:17:33.599165  478496 round_trippers.go:580]     Audit-Id: 976fcd0d-1a5b-4c43-9377-ff18cf353179
	I0103 20:17:33.599172  478496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 20:17:33.599195  478496 round_trippers.go:580]     Content-Type: application/json
	I0103 20:17:33.599237  478496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2dd5fe00-4fef-4b9d-9976-ced013b24f01
	I0103 20:17:33.599243  478496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1353a2-568a-4e94-914a-294122a8013a
	I0103 20:17:33.599250  478496 round_trippers.go:580]     Date: Wed, 03 Jan 2024 20:17:33 GMT
	I0103 20:17:33.599410  478496 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-004925-m02","uid":"f889ebec-aa24-4ea4-865f-3bfad5255332","resourceVersion":"498","creationTimestamp":"2024-01-03T20:17:28Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-004925-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-004925","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_03T20_17_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-03T20:17:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5844 chars]
	I0103 20:17:34.095639  478496 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-004925-m02
	I0103 20:17:34.095685  478496 round_trippers.go:469] Request Headers:
	I0103 20:17:34.095695  478496 round_trippers.go:473]     Accept: application/json, */*
	I0103 20:17:34.095703  478496 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0103 20:17:34.098352  478496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 20:17:34.098387  478496 round_trippers.go:577] Response Headers:
	I0103 20:17:34.098397  478496 round_trippers.go:580]     Audit-Id: cb436e63-59ef-47dd-8563-4e750c65d63f
	I0103 20:17:34.098414  478496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 20:17:34.098422  478496 round_trippers.go:580]     Content-Type: application/json
	I0103 20:17:34.098433  478496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2dd5fe00-4fef-4b9d-9976-ced013b24f01
	I0103 20:17:34.098442  478496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1353a2-568a-4e94-914a-294122a8013a
	I0103 20:17:34.098453  478496 round_trippers.go:580]     Date: Wed, 03 Jan 2024 20:17:34 GMT
	I0103 20:17:34.098606  478496 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-004925-m02","uid":"f889ebec-aa24-4ea4-865f-3bfad5255332","resourceVersion":"498","creationTimestamp":"2024-01-03T20:17:28Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-004925-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-004925","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_03T20_17_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-03T20:17:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5844 chars]
	I0103 20:17:34.099003  478496 node_ready.go:58] node "multinode-004925-m02" has status "Ready":"False"
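The block above shows minikube's node-readiness wait loop: roughly every 500ms it issues GET /api/v1/nodes/multinode-004925-m02, reads back the Node object, and logs the "Ready" condition from node_ready.go until it flips to True or the wait times out. Below is a minimal client-go sketch of that loop; it is an illustration under stated assumptions (default kubeconfig location, node name taken from this log), not minikube's actual node_ready.go code:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: the default ~/.kube/config; minikube actually writes a per-profile config.
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Poll every 500ms, matching the request cadence visible in the timestamps above.
	err = wait.PollImmediate(500*time.Millisecond, 2*time.Minute, func() (bool, error) {
		node, err := client.CoreV1().Nodes().Get(context.TODO(), "multinode-004925-m02", metav1.GetOptions{})
		if err != nil {
			return false, nil // treat transient errors as "not ready yet" and keep polling
		}
		for _, cond := range node.Status.Conditions {
			if cond.Type == corev1.NodeReady {
				fmt.Printf("node %q has status \"Ready\":%q\n", node.Name, cond.Status)
				return cond.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	})
	if err != nil {
		panic(err) // wait.ErrWaitTimeout if the node never became Ready
	}
}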
	I0103 20:17:34.595695  478496 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-004925-m02
	I0103 20:17:34.595720  478496 round_trippers.go:469] Request Headers:
	I0103 20:17:34.595730  478496 round_trippers.go:473]     Accept: application/json, */*
	I0103 20:17:34.595737  478496 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0103 20:17:34.598306  478496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 20:17:34.598330  478496 round_trippers.go:577] Response Headers:
	I0103 20:17:34.598339  478496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2dd5fe00-4fef-4b9d-9976-ced013b24f01
	I0103 20:17:34.598345  478496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1353a2-568a-4e94-914a-294122a8013a
	I0103 20:17:34.598352  478496 round_trippers.go:580]     Date: Wed, 03 Jan 2024 20:17:34 GMT
	I0103 20:17:34.598358  478496 round_trippers.go:580]     Audit-Id: 118347c6-8f8f-4c8d-83ad-d5201d151e1b
	I0103 20:17:34.598364  478496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 20:17:34.598372  478496 round_trippers.go:580]     Content-Type: application/json
	I0103 20:17:34.598734  478496 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-004925-m02","uid":"f889ebec-aa24-4ea4-865f-3bfad5255332","resourceVersion":"498","creationTimestamp":"2024-01-03T20:17:28Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-004925-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-004925","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_03T20_17_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-03T20:17:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5844 chars]
	I0103 20:17:35.096441  478496 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-004925-m02
	I0103 20:17:35.096472  478496 round_trippers.go:469] Request Headers:
	I0103 20:17:35.096482  478496 round_trippers.go:473]     Accept: application/json, */*
	I0103 20:17:35.096490  478496 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0103 20:17:35.099511  478496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 20:17:35.099542  478496 round_trippers.go:577] Response Headers:
	I0103 20:17:35.099552  478496 round_trippers.go:580]     Date: Wed, 03 Jan 2024 20:17:35 GMT
	I0103 20:17:35.099559  478496 round_trippers.go:580]     Audit-Id: 79f57804-992e-4c2d-b941-6fe2608b1bbb
	I0103 20:17:35.099566  478496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 20:17:35.099572  478496 round_trippers.go:580]     Content-Type: application/json
	I0103 20:17:35.099578  478496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2dd5fe00-4fef-4b9d-9976-ced013b24f01
	I0103 20:17:35.099585  478496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1353a2-568a-4e94-914a-294122a8013a
	I0103 20:17:35.099735  478496 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-004925-m02","uid":"f889ebec-aa24-4ea4-865f-3bfad5255332","resourceVersion":"498","creationTimestamp":"2024-01-03T20:17:28Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-004925-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-004925","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_03T20_17_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-03T20:17:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5844 chars]
	I0103 20:17:35.595653  478496 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-004925-m02
	I0103 20:17:35.595678  478496 round_trippers.go:469] Request Headers:
	I0103 20:17:35.595688  478496 round_trippers.go:473]     Accept: application/json, */*
	I0103 20:17:35.595695  478496 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0103 20:17:35.598086  478496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 20:17:35.598113  478496 round_trippers.go:577] Response Headers:
	I0103 20:17:35.598121  478496 round_trippers.go:580]     Audit-Id: ff732bc6-f6e1-43fb-8342-485e480e0e05
	I0103 20:17:35.598128  478496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 20:17:35.598134  478496 round_trippers.go:580]     Content-Type: application/json
	I0103 20:17:35.598141  478496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2dd5fe00-4fef-4b9d-9976-ced013b24f01
	I0103 20:17:35.598147  478496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1353a2-568a-4e94-914a-294122a8013a
	I0103 20:17:35.598157  478496 round_trippers.go:580]     Date: Wed, 03 Jan 2024 20:17:35 GMT
	I0103 20:17:35.598317  478496 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-004925-m02","uid":"f889ebec-aa24-4ea4-865f-3bfad5255332","resourceVersion":"498","creationTimestamp":"2024-01-03T20:17:28Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-004925-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-004925","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_03T20_17_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-03T20:17:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5844 chars]
	I0103 20:17:36.096282  478496 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-004925-m02
	I0103 20:17:36.096310  478496 round_trippers.go:469] Request Headers:
	I0103 20:17:36.096321  478496 round_trippers.go:473]     Accept: application/json, */*
	I0103 20:17:36.096328  478496 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0103 20:17:36.098986  478496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 20:17:36.099009  478496 round_trippers.go:577] Response Headers:
	I0103 20:17:36.099019  478496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1353a2-568a-4e94-914a-294122a8013a
	I0103 20:17:36.099026  478496 round_trippers.go:580]     Date: Wed, 03 Jan 2024 20:17:36 GMT
	I0103 20:17:36.099033  478496 round_trippers.go:580]     Audit-Id: 91410987-5fc3-4ba8-a557-6c389cb9195f
	I0103 20:17:36.099039  478496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 20:17:36.099049  478496 round_trippers.go:580]     Content-Type: application/json
	I0103 20:17:36.099056  478496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2dd5fe00-4fef-4b9d-9976-ced013b24f01
	I0103 20:17:36.099399  478496 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-004925-m02","uid":"f889ebec-aa24-4ea4-865f-3bfad5255332","resourceVersion":"498","creationTimestamp":"2024-01-03T20:17:28Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-004925-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-004925","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_03T20_17_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-03T20:17:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5844 chars]
	I0103 20:17:36.099800  478496 node_ready.go:58] node "multinode-004925-m02" has status "Ready":"False"
	I0103 20:17:36.595636  478496 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-004925-m02
	I0103 20:17:36.595659  478496 round_trippers.go:469] Request Headers:
	I0103 20:17:36.595670  478496 round_trippers.go:473]     Accept: application/json, */*
	I0103 20:17:36.595677  478496 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0103 20:17:36.598126  478496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 20:17:36.598147  478496 round_trippers.go:577] Response Headers:
	I0103 20:17:36.598155  478496 round_trippers.go:580]     Content-Type: application/json
	I0103 20:17:36.598162  478496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2dd5fe00-4fef-4b9d-9976-ced013b24f01
	I0103 20:17:36.598168  478496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1353a2-568a-4e94-914a-294122a8013a
	I0103 20:17:36.598175  478496 round_trippers.go:580]     Date: Wed, 03 Jan 2024 20:17:36 GMT
	I0103 20:17:36.598181  478496 round_trippers.go:580]     Audit-Id: feff38ea-db29-4e4b-8759-04540d186f19
	I0103 20:17:36.598187  478496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 20:17:36.598313  478496 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-004925-m02","uid":"f889ebec-aa24-4ea4-865f-3bfad5255332","resourceVersion":"498","creationTimestamp":"2024-01-03T20:17:28Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-004925-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-004925","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_03T20_17_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-03T20:17:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5844 chars]
	I0103 20:17:37.095698  478496 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-004925-m02
	I0103 20:17:37.095723  478496 round_trippers.go:469] Request Headers:
	I0103 20:17:37.095744  478496 round_trippers.go:473]     Accept: application/json, */*
	I0103 20:17:37.095752  478496 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0103 20:17:37.098749  478496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 20:17:37.098774  478496 round_trippers.go:577] Response Headers:
	I0103 20:17:37.098783  478496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 20:17:37.098789  478496 round_trippers.go:580]     Content-Type: application/json
	I0103 20:17:37.098796  478496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2dd5fe00-4fef-4b9d-9976-ced013b24f01
	I0103 20:17:37.098802  478496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1353a2-568a-4e94-914a-294122a8013a
	I0103 20:17:37.098810  478496 round_trippers.go:580]     Date: Wed, 03 Jan 2024 20:17:37 GMT
	I0103 20:17:37.098816  478496 round_trippers.go:580]     Audit-Id: 5653b536-8a68-414d-b121-55040bf90562
	I0103 20:17:37.099248  478496 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-004925-m02","uid":"f889ebec-aa24-4ea4-865f-3bfad5255332","resourceVersion":"498","creationTimestamp":"2024-01-03T20:17:28Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-004925-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-004925","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_03T20_17_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-03T20:17:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5844 chars]
	I0103 20:17:37.596412  478496 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-004925-m02
	I0103 20:17:37.596446  478496 round_trippers.go:469] Request Headers:
	I0103 20:17:37.596456  478496 round_trippers.go:473]     Accept: application/json, */*
	I0103 20:17:37.596463  478496 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0103 20:17:37.598938  478496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 20:17:37.598994  478496 round_trippers.go:577] Response Headers:
	I0103 20:17:37.599003  478496 round_trippers.go:580]     Audit-Id: 6ea81858-9613-4f73-b1d4-26ef516b2231
	I0103 20:17:37.599010  478496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 20:17:37.599016  478496 round_trippers.go:580]     Content-Type: application/json
	I0103 20:17:37.599023  478496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2dd5fe00-4fef-4b9d-9976-ced013b24f01
	I0103 20:17:37.599032  478496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1353a2-568a-4e94-914a-294122a8013a
	I0103 20:17:37.599051  478496 round_trippers.go:580]     Date: Wed, 03 Jan 2024 20:17:37 GMT
	I0103 20:17:37.599171  478496 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-004925-m02","uid":"f889ebec-aa24-4ea4-865f-3bfad5255332","resourceVersion":"498","creationTimestamp":"2024-01-03T20:17:28Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-004925-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-004925","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_03T20_17_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-03T20:17:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5844 chars]
	I0103 20:17:38.095660  478496 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-004925-m02
	I0103 20:17:38.095696  478496 round_trippers.go:469] Request Headers:
	I0103 20:17:38.095707  478496 round_trippers.go:473]     Accept: application/json, */*
	I0103 20:17:38.095714  478496 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0103 20:17:38.098801  478496 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0103 20:17:38.098830  478496 round_trippers.go:577] Response Headers:
	I0103 20:17:38.098840  478496 round_trippers.go:580]     Audit-Id: f87c876f-697a-40ad-a64f-cf0a36b01360
	I0103 20:17:38.098847  478496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 20:17:38.098860  478496 round_trippers.go:580]     Content-Type: application/json
	I0103 20:17:38.098866  478496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2dd5fe00-4fef-4b9d-9976-ced013b24f01
	I0103 20:17:38.098874  478496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1353a2-568a-4e94-914a-294122a8013a
	I0103 20:17:38.098893  478496 round_trippers.go:580]     Date: Wed, 03 Jan 2024 20:17:38 GMT
	I0103 20:17:38.099039  478496 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-004925-m02","uid":"f889ebec-aa24-4ea4-865f-3bfad5255332","resourceVersion":"498","creationTimestamp":"2024-01-03T20:17:28Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-004925-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-004925","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_03T20_17_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-03T20:17:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5844 chars]
	I0103 20:17:38.595998  478496 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-004925-m02
	I0103 20:17:38.596026  478496 round_trippers.go:469] Request Headers:
	I0103 20:17:38.596037  478496 round_trippers.go:473]     Accept: application/json, */*
	I0103 20:17:38.596044  478496 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0103 20:17:38.598533  478496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 20:17:38.598555  478496 round_trippers.go:577] Response Headers:
	I0103 20:17:38.598564  478496 round_trippers.go:580]     Audit-Id: b861ba90-79a9-4790-96a9-7eacee4e1c51
	I0103 20:17:38.598571  478496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 20:17:38.598577  478496 round_trippers.go:580]     Content-Type: application/json
	I0103 20:17:38.598583  478496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2dd5fe00-4fef-4b9d-9976-ced013b24f01
	I0103 20:17:38.598590  478496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1353a2-568a-4e94-914a-294122a8013a
	I0103 20:17:38.598596  478496 round_trippers.go:580]     Date: Wed, 03 Jan 2024 20:17:38 GMT
	I0103 20:17:38.598796  478496 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-004925-m02","uid":"f889ebec-aa24-4ea4-865f-3bfad5255332","resourceVersion":"498","creationTimestamp":"2024-01-03T20:17:28Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-004925-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-004925","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_03T20_17_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-03T20:17:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5844 chars]
	I0103 20:17:38.599256  478496 node_ready.go:58] node "multinode-004925-m02" has status "Ready":"False"
	I0103 20:17:39.096122  478496 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-004925-m02
	I0103 20:17:39.096146  478496 round_trippers.go:469] Request Headers:
	I0103 20:17:39.096157  478496 round_trippers.go:473]     Accept: application/json, */*
	I0103 20:17:39.096164  478496 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0103 20:17:39.099051  478496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 20:17:39.099075  478496 round_trippers.go:577] Response Headers:
	I0103 20:17:39.099090  478496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2dd5fe00-4fef-4b9d-9976-ced013b24f01
	I0103 20:17:39.099097  478496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1353a2-568a-4e94-914a-294122a8013a
	I0103 20:17:39.099103  478496 round_trippers.go:580]     Date: Wed, 03 Jan 2024 20:17:39 GMT
	I0103 20:17:39.099110  478496 round_trippers.go:580]     Audit-Id: ad5d9ad7-adf5-48f7-9cc6-561905a64b5a
	I0103 20:17:39.099116  478496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 20:17:39.099123  478496 round_trippers.go:580]     Content-Type: application/json
	I0103 20:17:39.099240  478496 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-004925-m02","uid":"f889ebec-aa24-4ea4-865f-3bfad5255332","resourceVersion":"505","creationTimestamp":"2024-01-03T20:17:28Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-004925-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-004925","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_03T20_17_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-03T20:17:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0103 20:17:39.596364  478496 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-004925-m02
	I0103 20:17:39.596403  478496 round_trippers.go:469] Request Headers:
	I0103 20:17:39.596413  478496 round_trippers.go:473]     Accept: application/json, */*
	I0103 20:17:39.596420  478496 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0103 20:17:39.598975  478496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 20:17:39.598998  478496 round_trippers.go:577] Response Headers:
	I0103 20:17:39.599007  478496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 20:17:39.599014  478496 round_trippers.go:580]     Content-Type: application/json
	I0103 20:17:39.599020  478496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2dd5fe00-4fef-4b9d-9976-ced013b24f01
	I0103 20:17:39.599027  478496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1353a2-568a-4e94-914a-294122a8013a
	I0103 20:17:39.599034  478496 round_trippers.go:580]     Date: Wed, 03 Jan 2024 20:17:39 GMT
	I0103 20:17:39.599040  478496 round_trippers.go:580]     Audit-Id: 40eaf7c1-5ed4-4b15-b167-8c32162e4b05
	I0103 20:17:39.599205  478496 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-004925-m02","uid":"f889ebec-aa24-4ea4-865f-3bfad5255332","resourceVersion":"505","creationTimestamp":"2024-01-03T20:17:28Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-004925-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-004925","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_03T20_17_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-03T20:17:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0103 20:17:40.096374  478496 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-004925-m02
	I0103 20:17:40.096401  478496 round_trippers.go:469] Request Headers:
	I0103 20:17:40.096412  478496 round_trippers.go:473]     Accept: application/json, */*
	I0103 20:17:40.096419  478496 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0103 20:17:40.099299  478496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 20:17:40.099329  478496 round_trippers.go:577] Response Headers:
	I0103 20:17:40.099338  478496 round_trippers.go:580]     Date: Wed, 03 Jan 2024 20:17:40 GMT
	I0103 20:17:40.099346  478496 round_trippers.go:580]     Audit-Id: f1244b47-6389-4650-81bf-65c1a527d2cd
	I0103 20:17:40.099352  478496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 20:17:40.099359  478496 round_trippers.go:580]     Content-Type: application/json
	I0103 20:17:40.099365  478496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2dd5fe00-4fef-4b9d-9976-ced013b24f01
	I0103 20:17:40.099374  478496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1353a2-568a-4e94-914a-294122a8013a
	I0103 20:17:40.099742  478496 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-004925-m02","uid":"f889ebec-aa24-4ea4-865f-3bfad5255332","resourceVersion":"505","creationTimestamp":"2024-01-03T20:17:28Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-004925-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-004925","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_03T20_17_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-03T20:17:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0103 20:17:40.596169  478496 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-004925-m02
	I0103 20:17:40.596195  478496 round_trippers.go:469] Request Headers:
	I0103 20:17:40.596205  478496 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0103 20:17:40.596213  478496 round_trippers.go:473]     Accept: application/json, */*
	I0103 20:17:40.598657  478496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 20:17:40.598681  478496 round_trippers.go:577] Response Headers:
	I0103 20:17:40.598689  478496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 20:17:40.598696  478496 round_trippers.go:580]     Content-Type: application/json
	I0103 20:17:40.598702  478496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2dd5fe00-4fef-4b9d-9976-ced013b24f01
	I0103 20:17:40.598709  478496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1353a2-568a-4e94-914a-294122a8013a
	I0103 20:17:40.598719  478496 round_trippers.go:580]     Date: Wed, 03 Jan 2024 20:17:40 GMT
	I0103 20:17:40.598725  478496 round_trippers.go:580]     Audit-Id: 3e41fe23-16e8-4b20-8d1e-79177c92de50
	I0103 20:17:40.599023  478496 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-004925-m02","uid":"f889ebec-aa24-4ea4-865f-3bfad5255332","resourceVersion":"505","creationTimestamp":"2024-01-03T20:17:28Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-004925-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-004925","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_03T20_17_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-03T20:17:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0103 20:17:40.599437  478496 node_ready.go:58] node "multinode-004925-m02" has status "Ready":"False"
	I0103 20:17:41.096265  478496 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-004925-m02
	I0103 20:17:41.096292  478496 round_trippers.go:469] Request Headers:
	I0103 20:17:41.096302  478496 round_trippers.go:473]     Accept: application/json, */*
	I0103 20:17:41.096309  478496 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0103 20:17:41.098810  478496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 20:17:41.098831  478496 round_trippers.go:577] Response Headers:
	I0103 20:17:41.098840  478496 round_trippers.go:580]     Audit-Id: bd14e4d1-d727-453c-baec-801ee1dfa767
	I0103 20:17:41.098847  478496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 20:17:41.098853  478496 round_trippers.go:580]     Content-Type: application/json
	I0103 20:17:41.098859  478496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2dd5fe00-4fef-4b9d-9976-ced013b24f01
	I0103 20:17:41.098866  478496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1353a2-568a-4e94-914a-294122a8013a
	I0103 20:17:41.098872  478496 round_trippers.go:580]     Date: Wed, 03 Jan 2024 20:17:41 GMT
	I0103 20:17:41.099025  478496 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-004925-m02","uid":"f889ebec-aa24-4ea4-865f-3bfad5255332","resourceVersion":"505","creationTimestamp":"2024-01-03T20:17:28Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-004925-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-004925","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_03T20_17_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-03T20:17:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0103 20:17:41.595858  478496 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-004925-m02
	I0103 20:17:41.595884  478496 round_trippers.go:469] Request Headers:
	I0103 20:17:41.595895  478496 round_trippers.go:473]     Accept: application/json, */*
	I0103 20:17:41.595902  478496 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0103 20:17:41.598306  478496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 20:17:41.598326  478496 round_trippers.go:577] Response Headers:
	I0103 20:17:41.598335  478496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 20:17:41.598341  478496 round_trippers.go:580]     Content-Type: application/json
	I0103 20:17:41.598347  478496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2dd5fe00-4fef-4b9d-9976-ced013b24f01
	I0103 20:17:41.598355  478496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1353a2-568a-4e94-914a-294122a8013a
	I0103 20:17:41.598361  478496 round_trippers.go:580]     Date: Wed, 03 Jan 2024 20:17:41 GMT
	I0103 20:17:41.598367  478496 round_trippers.go:580]     Audit-Id: 39bd1531-f0f1-43b5-8daa-624565c87d85
	I0103 20:17:41.598488  478496 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-004925-m02","uid":"f889ebec-aa24-4ea4-865f-3bfad5255332","resourceVersion":"505","creationTimestamp":"2024-01-03T20:17:28Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-004925-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-004925","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_03T20_17_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-03T20:17:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0103 20:17:42.095682  478496 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-004925-m02
	I0103 20:17:42.095713  478496 round_trippers.go:469] Request Headers:
	I0103 20:17:42.095724  478496 round_trippers.go:473]     Accept: application/json, */*
	I0103 20:17:42.095769  478496 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0103 20:17:42.113158  478496 round_trippers.go:574] Response Status: 200 OK in 17 milliseconds
	I0103 20:17:42.113186  478496 round_trippers.go:577] Response Headers:
	I0103 20:17:42.113195  478496 round_trippers.go:580]     Audit-Id: 9acc3a93-4aae-498c-8f64-87570f5f925b
	I0103 20:17:42.113203  478496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 20:17:42.113210  478496 round_trippers.go:580]     Content-Type: application/json
	I0103 20:17:42.113216  478496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2dd5fe00-4fef-4b9d-9976-ced013b24f01
	I0103 20:17:42.113223  478496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1353a2-568a-4e94-914a-294122a8013a
	I0103 20:17:42.113229  478496 round_trippers.go:580]     Date: Wed, 03 Jan 2024 20:17:42 GMT
	I0103 20:17:42.113330  478496 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-004925-m02","uid":"f889ebec-aa24-4ea4-865f-3bfad5255332","resourceVersion":"505","creationTimestamp":"2024-01-03T20:17:28Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-004925-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-004925","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_03T20_17_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-03T20:17:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0103 20:17:42.595674  478496 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-004925-m02
	I0103 20:17:42.595700  478496 round_trippers.go:469] Request Headers:
	I0103 20:17:42.595710  478496 round_trippers.go:473]     Accept: application/json, */*
	I0103 20:17:42.595718  478496 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0103 20:17:42.598293  478496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 20:17:42.598319  478496 round_trippers.go:577] Response Headers:
	I0103 20:17:42.598328  478496 round_trippers.go:580]     Content-Type: application/json
	I0103 20:17:42.598335  478496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2dd5fe00-4fef-4b9d-9976-ced013b24f01
	I0103 20:17:42.598341  478496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1353a2-568a-4e94-914a-294122a8013a
	I0103 20:17:42.598347  478496 round_trippers.go:580]     Date: Wed, 03 Jan 2024 20:17:42 GMT
	I0103 20:17:42.598353  478496 round_trippers.go:580]     Audit-Id: 15773bd8-f0e5-4fb3-9ff9-58cf955ef878
	I0103 20:17:42.598360  478496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 20:17:42.598623  478496 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-004925-m02","uid":"f889ebec-aa24-4ea4-865f-3bfad5255332","resourceVersion":"505","creationTimestamp":"2024-01-03T20:17:28Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-004925-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-004925","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_03T20_17_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-03T20:17:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0103 20:17:43.095660  478496 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-004925-m02
	I0103 20:17:43.095681  478496 round_trippers.go:469] Request Headers:
	I0103 20:17:43.095691  478496 round_trippers.go:473]     Accept: application/json, */*
	I0103 20:17:43.095698  478496 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0103 20:17:43.099636  478496 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0103 20:17:43.099661  478496 round_trippers.go:577] Response Headers:
	I0103 20:17:43.099670  478496 round_trippers.go:580]     Content-Type: application/json
	I0103 20:17:43.099677  478496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2dd5fe00-4fef-4b9d-9976-ced013b24f01
	I0103 20:17:43.099683  478496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1353a2-568a-4e94-914a-294122a8013a
	I0103 20:17:43.099690  478496 round_trippers.go:580]     Date: Wed, 03 Jan 2024 20:17:43 GMT
	I0103 20:17:43.099696  478496 round_trippers.go:580]     Audit-Id: 52ca35c7-83ce-4776-9607-c0fd45e60add
	I0103 20:17:43.099702  478496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 20:17:43.099814  478496 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-004925-m02","uid":"f889ebec-aa24-4ea4-865f-3bfad5255332","resourceVersion":"505","creationTimestamp":"2024-01-03T20:17:28Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-004925-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-004925","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_03T20_17_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-03T20:17:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0103 20:17:43.100205  478496 node_ready.go:58] node "multinode-004925-m02" has status "Ready":"False"
	I0103 20:17:43.596126  478496 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-004925-m02
	I0103 20:17:43.596155  478496 round_trippers.go:469] Request Headers:
	I0103 20:17:43.596165  478496 round_trippers.go:473]     Accept: application/json, */*
	I0103 20:17:43.596173  478496 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0103 20:17:43.598672  478496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 20:17:43.598692  478496 round_trippers.go:577] Response Headers:
	I0103 20:17:43.598701  478496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2dd5fe00-4fef-4b9d-9976-ced013b24f01
	I0103 20:17:43.598707  478496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1353a2-568a-4e94-914a-294122a8013a
	I0103 20:17:43.598714  478496 round_trippers.go:580]     Date: Wed, 03 Jan 2024 20:17:43 GMT
	I0103 20:17:43.598720  478496 round_trippers.go:580]     Audit-Id: ba6cd2ff-8eef-4f88-8d2d-9247df41bd5a
	I0103 20:17:43.598726  478496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 20:17:43.598733  478496 round_trippers.go:580]     Content-Type: application/json
	I0103 20:17:43.599114  478496 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-004925-m02","uid":"f889ebec-aa24-4ea4-865f-3bfad5255332","resourceVersion":"505","creationTimestamp":"2024-01-03T20:17:28Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-004925-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-004925","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_03T20_17_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-03T20:17:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0103 20:17:44.095952  478496 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-004925-m02
	I0103 20:17:44.095979  478496 round_trippers.go:469] Request Headers:
	I0103 20:17:44.095989  478496 round_trippers.go:473]     Accept: application/json, */*
	I0103 20:17:44.096003  478496 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0103 20:17:44.098585  478496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 20:17:44.098614  478496 round_trippers.go:577] Response Headers:
	I0103 20:17:44.098624  478496 round_trippers.go:580]     Date: Wed, 03 Jan 2024 20:17:44 GMT
	I0103 20:17:44.098631  478496 round_trippers.go:580]     Audit-Id: f0a6954e-206e-4e6c-a7f2-68cd5e2454e7
	I0103 20:17:44.098638  478496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 20:17:44.098652  478496 round_trippers.go:580]     Content-Type: application/json
	I0103 20:17:44.098664  478496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2dd5fe00-4fef-4b9d-9976-ced013b24f01
	I0103 20:17:44.098670  478496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1353a2-568a-4e94-914a-294122a8013a
	I0103 20:17:44.098789  478496 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-004925-m02","uid":"f889ebec-aa24-4ea4-865f-3bfad5255332","resourceVersion":"505","creationTimestamp":"2024-01-03T20:17:28Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-004925-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-004925","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_03T20_17_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-03T20:17:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0103 20:17:44.595917  478496 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-004925-m02
	I0103 20:17:44.595939  478496 round_trippers.go:469] Request Headers:
	I0103 20:17:44.595949  478496 round_trippers.go:473]     Accept: application/json, */*
	I0103 20:17:44.595957  478496 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0103 20:17:44.598337  478496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 20:17:44.598361  478496 round_trippers.go:577] Response Headers:
	I0103 20:17:44.598369  478496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1353a2-568a-4e94-914a-294122a8013a
	I0103 20:17:44.598376  478496 round_trippers.go:580]     Date: Wed, 03 Jan 2024 20:17:44 GMT
	I0103 20:17:44.598382  478496 round_trippers.go:580]     Audit-Id: 9cff4eed-dd98-4fc7-81cd-062ca88f5c65
	I0103 20:17:44.598389  478496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 20:17:44.598397  478496 round_trippers.go:580]     Content-Type: application/json
	I0103 20:17:44.598405  478496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2dd5fe00-4fef-4b9d-9976-ced013b24f01
	I0103 20:17:44.598542  478496 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-004925-m02","uid":"f889ebec-aa24-4ea4-865f-3bfad5255332","resourceVersion":"505","creationTimestamp":"2024-01-03T20:17:28Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-004925-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-004925","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_03T20_17_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-03T20:17:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0103 20:17:45.096408  478496 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-004925-m02
	I0103 20:17:45.096435  478496 round_trippers.go:469] Request Headers:
	I0103 20:17:45.096446  478496 round_trippers.go:473]     Accept: application/json, */*
	I0103 20:17:45.096453  478496 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0103 20:17:45.107424  478496 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0103 20:17:45.107453  478496 round_trippers.go:577] Response Headers:
	I0103 20:17:45.107463  478496 round_trippers.go:580]     Content-Type: application/json
	I0103 20:17:45.107471  478496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2dd5fe00-4fef-4b9d-9976-ced013b24f01
	I0103 20:17:45.107478  478496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1353a2-568a-4e94-914a-294122a8013a
	I0103 20:17:45.107487  478496 round_trippers.go:580]     Date: Wed, 03 Jan 2024 20:17:45 GMT
	I0103 20:17:45.107494  478496 round_trippers.go:580]     Audit-Id: db77686b-05a2-40ff-8b91-91a9fef9752f
	I0103 20:17:45.107500  478496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 20:17:45.120449  478496 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-004925-m02","uid":"f889ebec-aa24-4ea4-865f-3bfad5255332","resourceVersion":"505","creationTimestamp":"2024-01-03T20:17:28Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-004925-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-004925","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_03T20_17_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-03T20:17:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0103 20:17:45.121013  478496 node_ready.go:58] node "multinode-004925-m02" has status "Ready":"False"
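The condition being polled can also be inspected by hand against this profile's context; for example (context and node name taken from the log above, the jsonpath query is a generic illustration, not part of the test):

kubectl --context multinode-004925 get node multinode-004925-m02 -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'

This prints False while the kubelet on m02 is still coming up, matching the node_ready.go lines above.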
	I0103 20:17:45.596285  478496 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-004925-m02
	I0103 20:17:45.596308  478496 round_trippers.go:469] Request Headers:
	I0103 20:17:45.596319  478496 round_trippers.go:473]     Accept: application/json, */*
	I0103 20:17:45.596326  478496 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0103 20:17:45.598948  478496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 20:17:45.598976  478496 round_trippers.go:577] Response Headers:
	I0103 20:17:45.598985  478496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1353a2-568a-4e94-914a-294122a8013a
	I0103 20:17:45.598992  478496 round_trippers.go:580]     Date: Wed, 03 Jan 2024 20:17:45 GMT
	I0103 20:17:45.598999  478496 round_trippers.go:580]     Audit-Id: 383006db-7746-44b4-ba30-07de8ecf5e1b
	I0103 20:17:45.599005  478496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 20:17:45.599015  478496 round_trippers.go:580]     Content-Type: application/json
	I0103 20:17:45.599025  478496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2dd5fe00-4fef-4b9d-9976-ced013b24f01
	I0103 20:17:45.599466  478496 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-004925-m02","uid":"f889ebec-aa24-4ea4-865f-3bfad5255332","resourceVersion":"505","creationTimestamp":"2024-01-03T20:17:28Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-004925-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-004925","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_03T20_17_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-03T20:17:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0103 20:17:46.095924  478496 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-004925-m02
	I0103 20:17:46.095976  478496 round_trippers.go:469] Request Headers:
	I0103 20:17:46.095987  478496 round_trippers.go:473]     Accept: application/json, */*
	I0103 20:17:46.095995  478496 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0103 20:17:46.098695  478496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 20:17:46.098720  478496 round_trippers.go:577] Response Headers:
	I0103 20:17:46.098730  478496 round_trippers.go:580]     Audit-Id: 7f78ae28-64a2-4449-90d0-5aa10c4bf5d9
	I0103 20:17:46.098737  478496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 20:17:46.098744  478496 round_trippers.go:580]     Content-Type: application/json
	I0103 20:17:46.098776  478496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2dd5fe00-4fef-4b9d-9976-ced013b24f01
	I0103 20:17:46.098790  478496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1353a2-568a-4e94-914a-294122a8013a
	I0103 20:17:46.098826  478496 round_trippers.go:580]     Date: Wed, 03 Jan 2024 20:17:46 GMT
	I0103 20:17:46.098997  478496 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-004925-m02","uid":"f889ebec-aa24-4ea4-865f-3bfad5255332","resourceVersion":"505","creationTimestamp":"2024-01-03T20:17:28Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-004925-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-004925","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_03T20_17_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-03T20:17:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0103 20:17:46.596572  478496 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-004925-m02
	I0103 20:17:46.596602  478496 round_trippers.go:469] Request Headers:
	I0103 20:17:46.596612  478496 round_trippers.go:473]     Accept: application/json, */*
	I0103 20:17:46.596622  478496 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0103 20:17:46.599200  478496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 20:17:46.599224  478496 round_trippers.go:577] Response Headers:
	I0103 20:17:46.599233  478496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 20:17:46.599240  478496 round_trippers.go:580]     Content-Type: application/json
	I0103 20:17:46.599246  478496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2dd5fe00-4fef-4b9d-9976-ced013b24f01
	I0103 20:17:46.599253  478496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1353a2-568a-4e94-914a-294122a8013a
	I0103 20:17:46.599259  478496 round_trippers.go:580]     Date: Wed, 03 Jan 2024 20:17:46 GMT
	I0103 20:17:46.599270  478496 round_trippers.go:580]     Audit-Id: 37a4a3be-fba9-4411-8a31-a72bcd27489d
	I0103 20:17:46.599438  478496 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-004925-m02","uid":"f889ebec-aa24-4ea4-865f-3bfad5255332","resourceVersion":"505","creationTimestamp":"2024-01-03T20:17:28Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-004925-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-004925","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_03T20_17_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-03T20:17:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0103 20:17:47.095711  478496 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-004925-m02
	I0103 20:17:47.095751  478496 round_trippers.go:469] Request Headers:
	I0103 20:17:47.095761  478496 round_trippers.go:473]     Accept: application/json, */*
	I0103 20:17:47.095768  478496 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0103 20:17:47.098286  478496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 20:17:47.098311  478496 round_trippers.go:577] Response Headers:
	I0103 20:17:47.098319  478496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1353a2-568a-4e94-914a-294122a8013a
	I0103 20:17:47.098327  478496 round_trippers.go:580]     Date: Wed, 03 Jan 2024 20:17:47 GMT
	I0103 20:17:47.098334  478496 round_trippers.go:580]     Audit-Id: bdf117bc-4d4f-4d72-a596-1afdf8bde0d5
	I0103 20:17:47.098340  478496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 20:17:47.098346  478496 round_trippers.go:580]     Content-Type: application/json
	I0103 20:17:47.098353  478496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2dd5fe00-4fef-4b9d-9976-ced013b24f01
	I0103 20:17:47.098448  478496 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-004925-m02","uid":"f889ebec-aa24-4ea4-865f-3bfad5255332","resourceVersion":"505","creationTimestamp":"2024-01-03T20:17:28Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-004925-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-004925","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_03T20_17_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-03T20:17:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0103 20:17:47.596376  478496 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-004925-m02
	I0103 20:17:47.596402  478496 round_trippers.go:469] Request Headers:
	I0103 20:17:47.596412  478496 round_trippers.go:473]     Accept: application/json, */*
	I0103 20:17:47.596419  478496 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0103 20:17:47.599091  478496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 20:17:47.599116  478496 round_trippers.go:577] Response Headers:
	I0103 20:17:47.599125  478496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 20:17:47.599132  478496 round_trippers.go:580]     Content-Type: application/json
	I0103 20:17:47.599139  478496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2dd5fe00-4fef-4b9d-9976-ced013b24f01
	I0103 20:17:47.599145  478496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1353a2-568a-4e94-914a-294122a8013a
	I0103 20:17:47.599152  478496 round_trippers.go:580]     Date: Wed, 03 Jan 2024 20:17:47 GMT
	I0103 20:17:47.599163  478496 round_trippers.go:580]     Audit-Id: 268eb544-9db9-4254-a952-13059ba5c78d
	I0103 20:17:47.599396  478496 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-004925-m02","uid":"f889ebec-aa24-4ea4-865f-3bfad5255332","resourceVersion":"505","creationTimestamp":"2024-01-03T20:17:28Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-004925-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-004925","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_03T20_17_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-03T20:17:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0103 20:17:47.599802  478496 node_ready.go:58] node "multinode-004925-m02" has status "Ready":"False"
	I0103 20:17:48.095808  478496 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-004925-m02
	I0103 20:17:48.095847  478496 round_trippers.go:469] Request Headers:
	I0103 20:17:48.095858  478496 round_trippers.go:473]     Accept: application/json, */*
	I0103 20:17:48.095865  478496 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0103 20:17:48.099254  478496 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0103 20:17:48.099286  478496 round_trippers.go:577] Response Headers:
	I0103 20:17:48.099295  478496 round_trippers.go:580]     Audit-Id: 54b93926-d2a5-44e0-a394-a7ac9a636198
	I0103 20:17:48.099307  478496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 20:17:48.099314  478496 round_trippers.go:580]     Content-Type: application/json
	I0103 20:17:48.099322  478496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2dd5fe00-4fef-4b9d-9976-ced013b24f01
	I0103 20:17:48.099328  478496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1353a2-568a-4e94-914a-294122a8013a
	I0103 20:17:48.099340  478496 round_trippers.go:580]     Date: Wed, 03 Jan 2024 20:17:48 GMT
	I0103 20:17:48.099567  478496 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-004925-m02","uid":"f889ebec-aa24-4ea4-865f-3bfad5255332","resourceVersion":"505","creationTimestamp":"2024-01-03T20:17:28Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-004925-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-004925","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_03T20_17_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-03T20:17:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0103 20:17:48.596383  478496 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-004925-m02
	I0103 20:17:48.596418  478496 round_trippers.go:469] Request Headers:
	I0103 20:17:48.596428  478496 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0103 20:17:48.596444  478496 round_trippers.go:473]     Accept: application/json, */*
	I0103 20:17:48.599019  478496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 20:17:48.599048  478496 round_trippers.go:577] Response Headers:
	I0103 20:17:48.599057  478496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2dd5fe00-4fef-4b9d-9976-ced013b24f01
	I0103 20:17:48.599064  478496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1353a2-568a-4e94-914a-294122a8013a
	I0103 20:17:48.599070  478496 round_trippers.go:580]     Date: Wed, 03 Jan 2024 20:17:48 GMT
	I0103 20:17:48.599076  478496 round_trippers.go:580]     Audit-Id: 37ae95a1-cff9-4197-b42e-bfc4affd9c81
	I0103 20:17:48.599088  478496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 20:17:48.599094  478496 round_trippers.go:580]     Content-Type: application/json
	I0103 20:17:48.599243  478496 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-004925-m02","uid":"f889ebec-aa24-4ea4-865f-3bfad5255332","resourceVersion":"505","creationTimestamp":"2024-01-03T20:17:28Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-004925-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-004925","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_03T20_17_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-03T20:17:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0103 20:17:49.095999  478496 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-004925-m02
	I0103 20:17:49.096023  478496 round_trippers.go:469] Request Headers:
	I0103 20:17:49.096038  478496 round_trippers.go:473]     Accept: application/json, */*
	I0103 20:17:49.096046  478496 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0103 20:17:49.098782  478496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 20:17:49.098806  478496 round_trippers.go:577] Response Headers:
	I0103 20:17:49.098817  478496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2dd5fe00-4fef-4b9d-9976-ced013b24f01
	I0103 20:17:49.098823  478496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1353a2-568a-4e94-914a-294122a8013a
	I0103 20:17:49.098830  478496 round_trippers.go:580]     Date: Wed, 03 Jan 2024 20:17:49 GMT
	I0103 20:17:49.098836  478496 round_trippers.go:580]     Audit-Id: 0b09bd89-8e28-4dfa-80eb-7c8fc5483d4d
	I0103 20:17:49.098842  478496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 20:17:49.098848  478496 round_trippers.go:580]     Content-Type: application/json
	I0103 20:17:49.099063  478496 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-004925-m02","uid":"f889ebec-aa24-4ea4-865f-3bfad5255332","resourceVersion":"505","creationTimestamp":"2024-01-03T20:17:28Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-004925-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-004925","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_03T20_17_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-03T20:17:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0103 20:17:49.596605  478496 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-004925-m02
	I0103 20:17:49.596634  478496 round_trippers.go:469] Request Headers:
	I0103 20:17:49.596644  478496 round_trippers.go:473]     Accept: application/json, */*
	I0103 20:17:49.596652  478496 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0103 20:17:49.599291  478496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 20:17:49.599319  478496 round_trippers.go:577] Response Headers:
	I0103 20:17:49.599328  478496 round_trippers.go:580]     Date: Wed, 03 Jan 2024 20:17:49 GMT
	I0103 20:17:49.599336  478496 round_trippers.go:580]     Audit-Id: af15b1ec-e28f-4678-ae32-f6067f18f0f8
	I0103 20:17:49.599342  478496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 20:17:49.599349  478496 round_trippers.go:580]     Content-Type: application/json
	I0103 20:17:49.599355  478496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2dd5fe00-4fef-4b9d-9976-ced013b24f01
	I0103 20:17:49.599363  478496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1353a2-568a-4e94-914a-294122a8013a
	I0103 20:17:49.599604  478496 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-004925-m02","uid":"f889ebec-aa24-4ea4-865f-3bfad5255332","resourceVersion":"505","creationTimestamp":"2024-01-03T20:17:28Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-004925-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-004925","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_03T20_17_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-03T20:17:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0103 20:17:49.600004  478496 node_ready.go:58] node "multinode-004925-m02" has status "Ready":"False"
	I0103 20:17:50.096332  478496 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-004925-m02
	I0103 20:17:50.096360  478496 round_trippers.go:469] Request Headers:
	I0103 20:17:50.096370  478496 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0103 20:17:50.096378  478496 round_trippers.go:473]     Accept: application/json, */*
	I0103 20:17:50.099131  478496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 20:17:50.099160  478496 round_trippers.go:577] Response Headers:
	I0103 20:17:50.099170  478496 round_trippers.go:580]     Audit-Id: 89a9a59b-9aba-4c9c-81ca-976cd8adee5a
	I0103 20:17:50.099177  478496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 20:17:50.099184  478496 round_trippers.go:580]     Content-Type: application/json
	I0103 20:17:50.099223  478496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2dd5fe00-4fef-4b9d-9976-ced013b24f01
	I0103 20:17:50.099231  478496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1353a2-568a-4e94-914a-294122a8013a
	I0103 20:17:50.099240  478496 round_trippers.go:580]     Date: Wed, 03 Jan 2024 20:17:50 GMT
	I0103 20:17:50.099398  478496 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-004925-m02","uid":"f889ebec-aa24-4ea4-865f-3bfad5255332","resourceVersion":"505","creationTimestamp":"2024-01-03T20:17:28Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-004925-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-004925","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_03T20_17_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-03T20:17:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0103 20:17:50.596589  478496 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-004925-m02
	I0103 20:17:50.596618  478496 round_trippers.go:469] Request Headers:
	I0103 20:17:50.596628  478496 round_trippers.go:473]     Accept: application/json, */*
	I0103 20:17:50.596634  478496 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0103 20:17:50.599242  478496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 20:17:50.599263  478496 round_trippers.go:577] Response Headers:
	I0103 20:17:50.599271  478496 round_trippers.go:580]     Date: Wed, 03 Jan 2024 20:17:50 GMT
	I0103 20:17:50.599277  478496 round_trippers.go:580]     Audit-Id: 0517b784-7774-42e4-acc2-797310dd0958
	I0103 20:17:50.599284  478496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 20:17:50.599290  478496 round_trippers.go:580]     Content-Type: application/json
	I0103 20:17:50.599296  478496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2dd5fe00-4fef-4b9d-9976-ced013b24f01
	I0103 20:17:50.599303  478496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1353a2-568a-4e94-914a-294122a8013a
	I0103 20:17:50.599536  478496 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-004925-m02","uid":"f889ebec-aa24-4ea4-865f-3bfad5255332","resourceVersion":"505","creationTimestamp":"2024-01-03T20:17:28Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-004925-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-004925","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_03T20_17_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-03T20:17:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0103 20:17:51.095671  478496 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-004925-m02
	I0103 20:17:51.095711  478496 round_trippers.go:469] Request Headers:
	I0103 20:17:51.095722  478496 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0103 20:17:51.095730  478496 round_trippers.go:473]     Accept: application/json, */*
	I0103 20:17:51.098690  478496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 20:17:51.098714  478496 round_trippers.go:577] Response Headers:
	I0103 20:17:51.098723  478496 round_trippers.go:580]     Audit-Id: e7a2f2cd-4ce3-414b-962e-ee054687fce8
	I0103 20:17:51.098729  478496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 20:17:51.098736  478496 round_trippers.go:580]     Content-Type: application/json
	I0103 20:17:51.098742  478496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2dd5fe00-4fef-4b9d-9976-ced013b24f01
	I0103 20:17:51.098748  478496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1353a2-568a-4e94-914a-294122a8013a
	I0103 20:17:51.098755  478496 round_trippers.go:580]     Date: Wed, 03 Jan 2024 20:17:51 GMT
	I0103 20:17:51.098880  478496 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-004925-m02","uid":"f889ebec-aa24-4ea4-865f-3bfad5255332","resourceVersion":"505","creationTimestamp":"2024-01-03T20:17:28Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-004925-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-004925","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_03T20_17_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-03T20:17:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0103 20:17:51.596220  478496 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-004925-m02
	I0103 20:17:51.596244  478496 round_trippers.go:469] Request Headers:
	I0103 20:17:51.596256  478496 round_trippers.go:473]     Accept: application/json, */*
	I0103 20:17:51.596264  478496 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0103 20:17:51.598749  478496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 20:17:51.598777  478496 round_trippers.go:577] Response Headers:
	I0103 20:17:51.598786  478496 round_trippers.go:580]     Audit-Id: d2830dc8-e2f6-47df-84a4-c2f1fb647679
	I0103 20:17:51.598793  478496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 20:17:51.598799  478496 round_trippers.go:580]     Content-Type: application/json
	I0103 20:17:51.598808  478496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2dd5fe00-4fef-4b9d-9976-ced013b24f01
	I0103 20:17:51.598815  478496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1353a2-568a-4e94-914a-294122a8013a
	I0103 20:17:51.598825  478496 round_trippers.go:580]     Date: Wed, 03 Jan 2024 20:17:51 GMT
	I0103 20:17:51.598949  478496 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-004925-m02","uid":"f889ebec-aa24-4ea4-865f-3bfad5255332","resourceVersion":"505","creationTimestamp":"2024-01-03T20:17:28Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-004925-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-004925","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_03T20_17_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-03T20:17:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0103 20:17:52.096098  478496 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-004925-m02
	I0103 20:17:52.096126  478496 round_trippers.go:469] Request Headers:
	I0103 20:17:52.096136  478496 round_trippers.go:473]     Accept: application/json, */*
	I0103 20:17:52.096143  478496 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0103 20:17:52.098954  478496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 20:17:52.098988  478496 round_trippers.go:577] Response Headers:
	I0103 20:17:52.098997  478496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1353a2-568a-4e94-914a-294122a8013a
	I0103 20:17:52.099005  478496 round_trippers.go:580]     Date: Wed, 03 Jan 2024 20:17:52 GMT
	I0103 20:17:52.099012  478496 round_trippers.go:580]     Audit-Id: ba5aa3e8-7ff1-4be2-a6df-6b60e47e318b
	I0103 20:17:52.099018  478496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 20:17:52.099024  478496 round_trippers.go:580]     Content-Type: application/json
	I0103 20:17:52.099031  478496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2dd5fe00-4fef-4b9d-9976-ced013b24f01
	I0103 20:17:52.099156  478496 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-004925-m02","uid":"f889ebec-aa24-4ea4-865f-3bfad5255332","resourceVersion":"505","creationTimestamp":"2024-01-03T20:17:28Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-004925-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-004925","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_03T20_17_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-03T20:17:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0103 20:17:52.099604  478496 node_ready.go:58] node "multinode-004925-m02" has status "Ready":"False"
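
The "Ready":"False" verdicts in the node_ready.go lines are read from the conditions array inside each Response Body logged above (the array itself falls within the 6113 truncated characters, so it is not visible here). A standalone sketch of pulling that condition out of a complete Node JSON document; the nodeDoc type and readyStatus function are illustrative names, not part of minikube.

	package nodewait

	import (
		"encoding/json"
		"fmt"
	)

	// nodeDoc mirrors only the fields needed to read the Ready condition
	// out of an /api/v1/nodes/<name> response body like the ones logged above.
	type nodeDoc struct {
		Status struct {
			Conditions []struct {
				Type   string `json:"type"`
				Status string `json:"status"`
			} `json:"conditions"`
		} `json:"status"`
	}

	// readyStatus returns "True", "False", or "Unknown" for the Ready condition.
	func readyStatus(body []byte) (string, error) {
		var n nodeDoc
		if err := json.Unmarshal(body, &n); err != nil {
			return "", err
		}
		for _, c := range n.Status.Conditions {
			if c.Type == "Ready" {
				return c.Status, nil
			}
		}
		return "", fmt.Errorf("no Ready condition in node document")
	}
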
	I0103 20:17:52.596321  478496 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-004925-m02
	I0103 20:17:52.596364  478496 round_trippers.go:469] Request Headers:
	I0103 20:17:52.596373  478496 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0103 20:17:52.596381  478496 round_trippers.go:473]     Accept: application/json, */*
	I0103 20:17:52.599007  478496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 20:17:52.599033  478496 round_trippers.go:577] Response Headers:
	I0103 20:17:52.599042  478496 round_trippers.go:580]     Audit-Id: f6396bef-7ed4-4b7f-85c4-06211f716959
	I0103 20:17:52.599049  478496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 20:17:52.599055  478496 round_trippers.go:580]     Content-Type: application/json
	I0103 20:17:52.599061  478496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2dd5fe00-4fef-4b9d-9976-ced013b24f01
	I0103 20:17:52.599068  478496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1353a2-568a-4e94-914a-294122a8013a
	I0103 20:17:52.599075  478496 round_trippers.go:580]     Date: Wed, 03 Jan 2024 20:17:52 GMT
	I0103 20:17:52.599207  478496 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-004925-m02","uid":"f889ebec-aa24-4ea4-865f-3bfad5255332","resourceVersion":"505","creationTimestamp":"2024-01-03T20:17:28Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-004925-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-004925","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_03T20_17_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-03T20:17:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0103 20:17:53.096303  478496 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-004925-m02
	I0103 20:17:53.096334  478496 round_trippers.go:469] Request Headers:
	I0103 20:17:53.096344  478496 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0103 20:17:53.096351  478496 round_trippers.go:473]     Accept: application/json, */*
	I0103 20:17:53.099066  478496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 20:17:53.099088  478496 round_trippers.go:577] Response Headers:
	I0103 20:17:53.099096  478496 round_trippers.go:580]     Audit-Id: 64cd8cb3-c06e-467c-b0d8-046fcd3f6fb3
	I0103 20:17:53.099103  478496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 20:17:53.099109  478496 round_trippers.go:580]     Content-Type: application/json
	I0103 20:17:53.099115  478496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2dd5fe00-4fef-4b9d-9976-ced013b24f01
	I0103 20:17:53.099121  478496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1353a2-568a-4e94-914a-294122a8013a
	I0103 20:17:53.099128  478496 round_trippers.go:580]     Date: Wed, 03 Jan 2024 20:17:53 GMT
	I0103 20:17:53.099259  478496 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-004925-m02","uid":"f889ebec-aa24-4ea4-865f-3bfad5255332","resourceVersion":"505","creationTimestamp":"2024-01-03T20:17:28Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-004925-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-004925","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_03T20_17_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-03T20:17:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0103 20:17:53.596594  478496 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-004925-m02
	I0103 20:17:53.596621  478496 round_trippers.go:469] Request Headers:
	I0103 20:17:53.596631  478496 round_trippers.go:473]     Accept: application/json, */*
	I0103 20:17:53.596639  478496 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0103 20:17:53.599240  478496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 20:17:53.599263  478496 round_trippers.go:577] Response Headers:
	I0103 20:17:53.599272  478496 round_trippers.go:580]     Audit-Id: a883d1e8-af1a-4ef0-9f7c-b43fbbb3890c
	I0103 20:17:53.599279  478496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 20:17:53.599285  478496 round_trippers.go:580]     Content-Type: application/json
	I0103 20:17:53.599291  478496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2dd5fe00-4fef-4b9d-9976-ced013b24f01
	I0103 20:17:53.599300  478496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1353a2-568a-4e94-914a-294122a8013a
	I0103 20:17:53.599306  478496 round_trippers.go:580]     Date: Wed, 03 Jan 2024 20:17:53 GMT
	I0103 20:17:53.599435  478496 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-004925-m02","uid":"f889ebec-aa24-4ea4-865f-3bfad5255332","resourceVersion":"505","creationTimestamp":"2024-01-03T20:17:28Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-004925-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-004925","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_03T20_17_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-03T20:17:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0103 20:17:54.096516  478496 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-004925-m02
	I0103 20:17:54.096544  478496 round_trippers.go:469] Request Headers:
	I0103 20:17:54.096554  478496 round_trippers.go:473]     Accept: application/json, */*
	I0103 20:17:54.096561  478496 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0103 20:17:54.099260  478496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 20:17:54.099290  478496 round_trippers.go:577] Response Headers:
	I0103 20:17:54.099299  478496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2dd5fe00-4fef-4b9d-9976-ced013b24f01
	I0103 20:17:54.099305  478496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1353a2-568a-4e94-914a-294122a8013a
	I0103 20:17:54.099313  478496 round_trippers.go:580]     Date: Wed, 03 Jan 2024 20:17:54 GMT
	I0103 20:17:54.099319  478496 round_trippers.go:580]     Audit-Id: 4ceb23ea-0668-42c4-883a-ea90bea5cf8a
	I0103 20:17:54.099325  478496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 20:17:54.099332  478496 round_trippers.go:580]     Content-Type: application/json
	I0103 20:17:54.099449  478496 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-004925-m02","uid":"f889ebec-aa24-4ea4-865f-3bfad5255332","resourceVersion":"505","creationTimestamp":"2024-01-03T20:17:28Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-004925-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-004925","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_03T20_17_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-03T20:17:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0103 20:17:54.099848  478496 node_ready.go:58] node "multinode-004925-m02" has status "Ready":"False"
	I0103 20:17:54.596170  478496 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-004925-m02
	I0103 20:17:54.596197  478496 round_trippers.go:469] Request Headers:
	I0103 20:17:54.596208  478496 round_trippers.go:473]     Accept: application/json, */*
	I0103 20:17:54.596215  478496 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0103 20:17:54.598671  478496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 20:17:54.598692  478496 round_trippers.go:577] Response Headers:
	I0103 20:17:54.598701  478496 round_trippers.go:580]     Audit-Id: 2e1dff39-41f5-459d-acdf-e6d00e85b4ab
	I0103 20:17:54.598707  478496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 20:17:54.598713  478496 round_trippers.go:580]     Content-Type: application/json
	I0103 20:17:54.598719  478496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2dd5fe00-4fef-4b9d-9976-ced013b24f01
	I0103 20:17:54.598725  478496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1353a2-568a-4e94-914a-294122a8013a
	I0103 20:17:54.598732  478496 round_trippers.go:580]     Date: Wed, 03 Jan 2024 20:17:54 GMT
	I0103 20:17:54.598854  478496 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-004925-m02","uid":"f889ebec-aa24-4ea4-865f-3bfad5255332","resourceVersion":"505","creationTimestamp":"2024-01-03T20:17:28Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-004925-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-004925","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_03T20_17_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-03T20:17:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0103 20:17:55.095995  478496 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-004925-m02
	I0103 20:17:55.096024  478496 round_trippers.go:469] Request Headers:
	I0103 20:17:55.096053  478496 round_trippers.go:473]     Accept: application/json, */*
	I0103 20:17:55.096061  478496 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0103 20:17:55.098835  478496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 20:17:55.098859  478496 round_trippers.go:577] Response Headers:
	I0103 20:17:55.098868  478496 round_trippers.go:580]     Audit-Id: 1ad4001d-e0e5-4e78-ba23-da6c6927d6eb
	I0103 20:17:55.098874  478496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 20:17:55.098881  478496 round_trippers.go:580]     Content-Type: application/json
	I0103 20:17:55.098887  478496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2dd5fe00-4fef-4b9d-9976-ced013b24f01
	I0103 20:17:55.098893  478496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1353a2-568a-4e94-914a-294122a8013a
	I0103 20:17:55.098900  478496 round_trippers.go:580]     Date: Wed, 03 Jan 2024 20:17:55 GMT
	I0103 20:17:55.099031  478496 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-004925-m02","uid":"f889ebec-aa24-4ea4-865f-3bfad5255332","resourceVersion":"505","creationTimestamp":"2024-01-03T20:17:28Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-004925-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-004925","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_03T20_17_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-03T20:17:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0103 20:17:55.595679  478496 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-004925-m02
	I0103 20:17:55.595705  478496 round_trippers.go:469] Request Headers:
	I0103 20:17:55.595715  478496 round_trippers.go:473]     Accept: application/json, */*
	I0103 20:17:55.595722  478496 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0103 20:17:55.598125  478496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 20:17:55.598149  478496 round_trippers.go:577] Response Headers:
	I0103 20:17:55.598157  478496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1353a2-568a-4e94-914a-294122a8013a
	I0103 20:17:55.598166  478496 round_trippers.go:580]     Date: Wed, 03 Jan 2024 20:17:55 GMT
	I0103 20:17:55.598172  478496 round_trippers.go:580]     Audit-Id: 36612810-b39d-4d39-a40e-342cd10f226e
	I0103 20:17:55.598179  478496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 20:17:55.598188  478496 round_trippers.go:580]     Content-Type: application/json
	I0103 20:17:55.598194  478496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2dd5fe00-4fef-4b9d-9976-ced013b24f01
	I0103 20:17:55.598459  478496 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-004925-m02","uid":"f889ebec-aa24-4ea4-865f-3bfad5255332","resourceVersion":"505","creationTimestamp":"2024-01-03T20:17:28Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-004925-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-004925","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_03T20_17_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-03T20:17:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0103 20:17:56.095882  478496 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-004925-m02
	I0103 20:17:56.095912  478496 round_trippers.go:469] Request Headers:
	I0103 20:17:56.095929  478496 round_trippers.go:473]     Accept: application/json, */*
	I0103 20:17:56.095936  478496 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0103 20:17:56.098485  478496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 20:17:56.098506  478496 round_trippers.go:577] Response Headers:
	I0103 20:17:56.098534  478496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2dd5fe00-4fef-4b9d-9976-ced013b24f01
	I0103 20:17:56.098543  478496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1353a2-568a-4e94-914a-294122a8013a
	I0103 20:17:56.098549  478496 round_trippers.go:580]     Date: Wed, 03 Jan 2024 20:17:56 GMT
	I0103 20:17:56.098556  478496 round_trippers.go:580]     Audit-Id: bdea5b44-4957-4351-824d-f7556e6853d1
	I0103 20:17:56.098561  478496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 20:17:56.098568  478496 round_trippers.go:580]     Content-Type: application/json
	I0103 20:17:56.098691  478496 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-004925-m02","uid":"f889ebec-aa24-4ea4-865f-3bfad5255332","resourceVersion":"505","creationTimestamp":"2024-01-03T20:17:28Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-004925-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-004925","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_03T20_17_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-03T20:17:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0103 20:17:56.595747  478496 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-004925-m02
	I0103 20:17:56.595773  478496 round_trippers.go:469] Request Headers:
	I0103 20:17:56.595783  478496 round_trippers.go:473]     Accept: application/json, */*
	I0103 20:17:56.595790  478496 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0103 20:17:56.598364  478496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 20:17:56.598394  478496 round_trippers.go:577] Response Headers:
	I0103 20:17:56.598403  478496 round_trippers.go:580]     Audit-Id: 925a93f4-6373-4642-93e7-9f73a5dbd4db
	I0103 20:17:56.598410  478496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 20:17:56.598417  478496 round_trippers.go:580]     Content-Type: application/json
	I0103 20:17:56.598423  478496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2dd5fe00-4fef-4b9d-9976-ced013b24f01
	I0103 20:17:56.598429  478496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1353a2-568a-4e94-914a-294122a8013a
	I0103 20:17:56.598439  478496 round_trippers.go:580]     Date: Wed, 03 Jan 2024 20:17:56 GMT
	I0103 20:17:56.598790  478496 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-004925-m02","uid":"f889ebec-aa24-4ea4-865f-3bfad5255332","resourceVersion":"505","creationTimestamp":"2024-01-03T20:17:28Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-004925-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-004925","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_03T20_17_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-03T20:17:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0103 20:17:56.599194  478496 node_ready.go:58] node "multinode-004925-m02" has status "Ready":"False"
	I0103 20:17:57.096438  478496 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-004925-m02
	I0103 20:17:57.096462  478496 round_trippers.go:469] Request Headers:
	I0103 20:17:57.096472  478496 round_trippers.go:473]     Accept: application/json, */*
	I0103 20:17:57.096479  478496 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0103 20:17:57.098939  478496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 20:17:57.098964  478496 round_trippers.go:577] Response Headers:
	I0103 20:17:57.098973  478496 round_trippers.go:580]     Content-Type: application/json
	I0103 20:17:57.098979  478496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2dd5fe00-4fef-4b9d-9976-ced013b24f01
	I0103 20:17:57.098986  478496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1353a2-568a-4e94-914a-294122a8013a
	I0103 20:17:57.098994  478496 round_trippers.go:580]     Date: Wed, 03 Jan 2024 20:17:57 GMT
	I0103 20:17:57.099006  478496 round_trippers.go:580]     Audit-Id: 0bcf8dd8-93a6-4849-b016-33f17984bceb
	I0103 20:17:57.099012  478496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 20:17:57.099317  478496 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-004925-m02","uid":"f889ebec-aa24-4ea4-865f-3bfad5255332","resourceVersion":"505","creationTimestamp":"2024-01-03T20:17:28Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-004925-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-004925","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_03T20_17_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-03T20:17:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0103 20:17:57.595676  478496 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-004925-m02
	I0103 20:17:57.595702  478496 round_trippers.go:469] Request Headers:
	I0103 20:17:57.595712  478496 round_trippers.go:473]     Accept: application/json, */*
	I0103 20:17:57.595719  478496 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0103 20:17:57.598189  478496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 20:17:57.598254  478496 round_trippers.go:577] Response Headers:
	I0103 20:17:57.598277  478496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2dd5fe00-4fef-4b9d-9976-ced013b24f01
	I0103 20:17:57.598299  478496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1353a2-568a-4e94-914a-294122a8013a
	I0103 20:17:57.598334  478496 round_trippers.go:580]     Date: Wed, 03 Jan 2024 20:17:57 GMT
	I0103 20:17:57.598358  478496 round_trippers.go:580]     Audit-Id: 0b805dbe-b7c9-4fd6-a2d6-af19f51c91bf
	I0103 20:17:57.598379  478496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 20:17:57.598414  478496 round_trippers.go:580]     Content-Type: application/json
	I0103 20:17:57.598599  478496 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-004925-m02","uid":"f889ebec-aa24-4ea4-865f-3bfad5255332","resourceVersion":"505","creationTimestamp":"2024-01-03T20:17:28Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-004925-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-004925","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_03T20_17_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-03T20:17:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0103 20:17:58.095699  478496 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-004925-m02
	I0103 20:17:58.095774  478496 round_trippers.go:469] Request Headers:
	I0103 20:17:58.095798  478496 round_trippers.go:473]     Accept: application/json, */*
	I0103 20:17:58.095818  478496 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0103 20:17:58.098956  478496 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0103 20:17:58.098978  478496 round_trippers.go:577] Response Headers:
	I0103 20:17:58.098986  478496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1353a2-568a-4e94-914a-294122a8013a
	I0103 20:17:58.098993  478496 round_trippers.go:580]     Date: Wed, 03 Jan 2024 20:17:58 GMT
	I0103 20:17:58.098999  478496 round_trippers.go:580]     Audit-Id: 505f7841-9fe3-49a8-9fd6-ce51ad0cc867
	I0103 20:17:58.099006  478496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 20:17:58.099014  478496 round_trippers.go:580]     Content-Type: application/json
	I0103 20:17:58.099020  478496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2dd5fe00-4fef-4b9d-9976-ced013b24f01
	I0103 20:17:58.099163  478496 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-004925-m02","uid":"f889ebec-aa24-4ea4-865f-3bfad5255332","resourceVersion":"505","creationTimestamp":"2024-01-03T20:17:28Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-004925-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-004925","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_03T20_17_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-03T20:17:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0103 20:17:58.596584  478496 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-004925-m02
	I0103 20:17:58.596611  478496 round_trippers.go:469] Request Headers:
	I0103 20:17:58.596621  478496 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0103 20:17:58.596629  478496 round_trippers.go:473]     Accept: application/json, */*
	I0103 20:17:58.599100  478496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 20:17:58.599137  478496 round_trippers.go:577] Response Headers:
	I0103 20:17:58.599146  478496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 20:17:58.599153  478496 round_trippers.go:580]     Content-Type: application/json
	I0103 20:17:58.599160  478496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2dd5fe00-4fef-4b9d-9976-ced013b24f01
	I0103 20:17:58.599167  478496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1353a2-568a-4e94-914a-294122a8013a
	I0103 20:17:58.599174  478496 round_trippers.go:580]     Date: Wed, 03 Jan 2024 20:17:58 GMT
	I0103 20:17:58.599197  478496 round_trippers.go:580]     Audit-Id: 42528f18-5af4-4318-896c-f52a7b4f654b
	I0103 20:17:58.599524  478496 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-004925-m02","uid":"f889ebec-aa24-4ea4-865f-3bfad5255332","resourceVersion":"505","creationTimestamp":"2024-01-03T20:17:28Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-004925-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-004925","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_03T20_17_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-03T20:17:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0103 20:17:58.599913  478496 node_ready.go:58] node "multinode-004925-m02" has status "Ready":"False"
	I0103 20:17:59.095805  478496 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-004925-m02
	I0103 20:17:59.095861  478496 round_trippers.go:469] Request Headers:
	I0103 20:17:59.095871  478496 round_trippers.go:473]     Accept: application/json, */*
	I0103 20:17:59.095884  478496 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0103 20:17:59.098451  478496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 20:17:59.098471  478496 round_trippers.go:577] Response Headers:
	I0103 20:17:59.098480  478496 round_trippers.go:580]     Audit-Id: 1871512f-538b-42ed-934a-a275c8ddce33
	I0103 20:17:59.098487  478496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 20:17:59.098493  478496 round_trippers.go:580]     Content-Type: application/json
	I0103 20:17:59.098499  478496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2dd5fe00-4fef-4b9d-9976-ced013b24f01
	I0103 20:17:59.098506  478496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1353a2-568a-4e94-914a-294122a8013a
	I0103 20:17:59.098512  478496 round_trippers.go:580]     Date: Wed, 03 Jan 2024 20:17:59 GMT
	I0103 20:17:59.098696  478496 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-004925-m02","uid":"f889ebec-aa24-4ea4-865f-3bfad5255332","resourceVersion":"505","creationTimestamp":"2024-01-03T20:17:28Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-004925-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-004925","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_03T20_17_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-03T20:17:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0103 20:17:59.596352  478496 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-004925-m02
	I0103 20:17:59.596381  478496 round_trippers.go:469] Request Headers:
	I0103 20:17:59.596393  478496 round_trippers.go:473]     Accept: application/json, */*
	I0103 20:17:59.596400  478496 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0103 20:17:59.599010  478496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 20:17:59.599036  478496 round_trippers.go:577] Response Headers:
	I0103 20:17:59.599044  478496 round_trippers.go:580]     Audit-Id: 51d99d7f-b78d-4b2a-ba90-16bf271fff9d
	I0103 20:17:59.599051  478496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 20:17:59.599057  478496 round_trippers.go:580]     Content-Type: application/json
	I0103 20:17:59.599064  478496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2dd5fe00-4fef-4b9d-9976-ced013b24f01
	I0103 20:17:59.599071  478496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1353a2-568a-4e94-914a-294122a8013a
	I0103 20:17:59.599079  478496 round_trippers.go:580]     Date: Wed, 03 Jan 2024 20:17:59 GMT
	I0103 20:17:59.599257  478496 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-004925-m02","uid":"f889ebec-aa24-4ea4-865f-3bfad5255332","resourceVersion":"505","creationTimestamp":"2024-01-03T20:17:28Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-004925-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-004925","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_03T20_17_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-03T20:17:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0103 20:18:00.100724  478496 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-004925-m02
	I0103 20:18:00.100749  478496 round_trippers.go:469] Request Headers:
	I0103 20:18:00.100759  478496 round_trippers.go:473]     Accept: application/json, */*
	I0103 20:18:00.100766  478496 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0103 20:18:00.114272  478496 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0103 20:18:00.114308  478496 round_trippers.go:577] Response Headers:
	I0103 20:18:00.114317  478496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 20:18:00.114325  478496 round_trippers.go:580]     Content-Type: application/json
	I0103 20:18:00.114332  478496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2dd5fe00-4fef-4b9d-9976-ced013b24f01
	I0103 20:18:00.114338  478496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1353a2-568a-4e94-914a-294122a8013a
	I0103 20:18:00.114344  478496 round_trippers.go:580]     Date: Wed, 03 Jan 2024 20:18:00 GMT
	I0103 20:18:00.114351  478496 round_trippers.go:580]     Audit-Id: 9ae63861-97d8-4b43-a40f-7a90f142588e
	I0103 20:18:00.116640  478496 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-004925-m02","uid":"f889ebec-aa24-4ea4-865f-3bfad5255332","resourceVersion":"505","creationTimestamp":"2024-01-03T20:17:28Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-004925-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-004925","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_03T20_17_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-03T20:17:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0103 20:18:00.595786  478496 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-004925-m02
	I0103 20:18:00.595812  478496 round_trippers.go:469] Request Headers:
	I0103 20:18:00.595821  478496 round_trippers.go:473]     Accept: application/json, */*
	I0103 20:18:00.595830  478496 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0103 20:18:00.598481  478496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 20:18:00.598568  478496 round_trippers.go:577] Response Headers:
	I0103 20:18:00.598591  478496 round_trippers.go:580]     Audit-Id: 7aa92783-0891-4fc1-b500-1a30409669a0
	I0103 20:18:00.598654  478496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 20:18:00.598676  478496 round_trippers.go:580]     Content-Type: application/json
	I0103 20:18:00.598690  478496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2dd5fe00-4fef-4b9d-9976-ced013b24f01
	I0103 20:18:00.598698  478496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1353a2-568a-4e94-914a-294122a8013a
	I0103 20:18:00.598705  478496 round_trippers.go:580]     Date: Wed, 03 Jan 2024 20:18:00 GMT
	I0103 20:18:00.598820  478496 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-004925-m02","uid":"f889ebec-aa24-4ea4-865f-3bfad5255332","resourceVersion":"505","creationTimestamp":"2024-01-03T20:17:28Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-004925-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-004925","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_03T20_17_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-03T20:17:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0103 20:18:01.096354  478496 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-004925-m02
	I0103 20:18:01.096378  478496 round_trippers.go:469] Request Headers:
	I0103 20:18:01.096389  478496 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0103 20:18:01.096397  478496 round_trippers.go:473]     Accept: application/json, */*
	I0103 20:18:01.098949  478496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 20:18:01.099015  478496 round_trippers.go:577] Response Headers:
	I0103 20:18:01.099054  478496 round_trippers.go:580]     Audit-Id: c234b06d-a4ad-431b-a183-99e0b09468fd
	I0103 20:18:01.099078  478496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 20:18:01.099097  478496 round_trippers.go:580]     Content-Type: application/json
	I0103 20:18:01.099111  478496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2dd5fe00-4fef-4b9d-9976-ced013b24f01
	I0103 20:18:01.099130  478496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1353a2-568a-4e94-914a-294122a8013a
	I0103 20:18:01.099138  478496 round_trippers.go:580]     Date: Wed, 03 Jan 2024 20:18:01 GMT
	I0103 20:18:01.099274  478496 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-004925-m02","uid":"f889ebec-aa24-4ea4-865f-3bfad5255332","resourceVersion":"530","creationTimestamp":"2024-01-03T20:17:28Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-004925-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-004925","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_03T20_17_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-03T20:17:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5810 chars]
	I0103 20:18:01.099655  478496 node_ready.go:49] node "multinode-004925-m02" has status "Ready":"True"
	I0103 20:18:01.099672  478496 node_ready.go:38] duration metric: took 31.504235729s waiting for node "multinode-004925-m02" to be "Ready" ...
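The polling loop recorded above is the node_ready wait: GET the Node object roughly every 500ms and check its NodeReady condition until it flips to "True". Below is a minimal client-go sketch of that kind of poll; the kubeconfig path, interval, and timeout are illustrative placeholders, not minikube's actual values or implementation.

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }

        const node = "multinode-004925-m02"
        // Poll until the NodeReady condition reports "True" or the timeout hits.
        err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
            func(ctx context.Context) (bool, error) {
                n, err := client.CoreV1().Nodes().Get(ctx, node, metav1.GetOptions{})
                if err != nil {
                    return false, err
                }
                for _, c := range n.Status.Conditions {
                    if c.Type == corev1.NodeReady {
                        return c.Status == corev1.ConditionTrue, nil
                    }
                }
                return false, nil
            })
        if err != nil {
            panic(err)
        }
        fmt.Printf("node %q is Ready\n", node)
    }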
	I0103 20:18:01.099682  478496 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0103 20:18:01.099745  478496 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0103 20:18:01.099756  478496 round_trippers.go:469] Request Headers:
	I0103 20:18:01.099764  478496 round_trippers.go:473]     Accept: application/json, */*
	I0103 20:18:01.099770  478496 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0103 20:18:01.103430  478496 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0103 20:18:01.103456  478496 round_trippers.go:577] Response Headers:
	I0103 20:18:01.103466  478496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2dd5fe00-4fef-4b9d-9976-ced013b24f01
	I0103 20:18:01.103473  478496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1353a2-568a-4e94-914a-294122a8013a
	I0103 20:18:01.103480  478496 round_trippers.go:580]     Date: Wed, 03 Jan 2024 20:18:01 GMT
	I0103 20:18:01.103486  478496 round_trippers.go:580]     Audit-Id: 0844464d-66c7-472d-90d1-45c7edc8a1bd
	I0103 20:18:01.103492  478496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 20:18:01.103499  478496 round_trippers.go:580]     Content-Type: application/json
	I0103 20:18:01.104042  478496 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"531"},"items":[{"metadata":{"name":"coredns-5dd5756b68-g2x92","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f982667b-3ee3-4aaa-9b63-2bee4f32be8f","resourceVersion":"441","creationTimestamp":"2024-01-03T20:16:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"8ee1252d-62a1-4cc7-8c78-b162002b9076","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-03T20:16:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8ee1252d-62a1-4cc7-8c78-b162002b9076\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 68972 chars]
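The PodList fetch above pulls every kube-system pod in one request and then filters for the six label sets named in the pod_ready message. The log shows a single unfiltered list with client-side filtering; an equivalent way to phrase the same query with server-side label selectors is sketched below (package name and helper are hypothetical, for illustration only).

    package waitutil

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // systemCriticalSelectors mirrors the label list in the log entry above.
    var systemCriticalSelectors = []string{
        "k8s-app=kube-dns", "component=etcd", "component=kube-apiserver",
        "component=kube-controller-manager", "k8s-app=kube-proxy", "component=kube-scheduler",
    }

    // listSystemCritical fetches the kube-system pods matching each selector,
    // letting the apiserver do the filtering instead of the client.
    func listSystemCritical(ctx context.Context, c kubernetes.Interface) ([]corev1.Pod, error) {
        var out []corev1.Pod
        for _, sel := range systemCriticalSelectors {
            pods, err := c.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{LabelSelector: sel})
            if err != nil {
                return nil, err
            }
            out = append(out, pods.Items...)
        }
        return out, nil
    }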
	I0103 20:18:01.107165  478496 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-g2x92" in "kube-system" namespace to be "Ready" ...
	I0103 20:18:01.107265  478496 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-g2x92
	I0103 20:18:01.107280  478496 round_trippers.go:469] Request Headers:
	I0103 20:18:01.107289  478496 round_trippers.go:473]     Accept: application/json, */*
	I0103 20:18:01.107301  478496 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0103 20:18:01.110106  478496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 20:18:01.110126  478496 round_trippers.go:577] Response Headers:
	I0103 20:18:01.110134  478496 round_trippers.go:580]     Audit-Id: 5ef503f9-13a1-41e9-93c5-b8706cd79146
	I0103 20:18:01.110140  478496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 20:18:01.110146  478496 round_trippers.go:580]     Content-Type: application/json
	I0103 20:18:01.110153  478496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2dd5fe00-4fef-4b9d-9976-ced013b24f01
	I0103 20:18:01.110159  478496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1353a2-568a-4e94-914a-294122a8013a
	I0103 20:18:01.110166  478496 round_trippers.go:580]     Date: Wed, 03 Jan 2024 20:18:01 GMT
	I0103 20:18:01.110265  478496 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-g2x92","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f982667b-3ee3-4aaa-9b63-2bee4f32be8f","resourceVersion":"441","creationTimestamp":"2024-01-03T20:16:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"8ee1252d-62a1-4cc7-8c78-b162002b9076","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-03T20:16:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8ee1252d-62a1-4cc7-8c78-b162002b9076\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6263 chars]
	I0103 20:18:01.110834  478496 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-004925
	I0103 20:18:01.110843  478496 round_trippers.go:469] Request Headers:
	I0103 20:18:01.110852  478496 round_trippers.go:473]     Accept: application/json, */*
	I0103 20:18:01.110858  478496 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0103 20:18:01.113106  478496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 20:18:01.113124  478496 round_trippers.go:577] Response Headers:
	I0103 20:18:01.113131  478496 round_trippers.go:580]     Audit-Id: 14ecc646-1d4f-401d-a149-946f915b4046
	I0103 20:18:01.113138  478496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 20:18:01.113144  478496 round_trippers.go:580]     Content-Type: application/json
	I0103 20:18:01.113150  478496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2dd5fe00-4fef-4b9d-9976-ced013b24f01
	I0103 20:18:01.113156  478496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1353a2-568a-4e94-914a-294122a8013a
	I0103 20:18:01.113163  478496 round_trippers.go:580]     Date: Wed, 03 Jan 2024 20:18:01 GMT
	I0103 20:18:01.113257  478496 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-004925","uid":"106a4418-70f9-4422-af71-2d3c896282a0","resourceVersion":"425","creationTimestamp":"2024-01-03T20:16:24Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-004925","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-004925","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T20_16_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-03T20:16:24Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0103 20:18:01.113671  478496 pod_ready.go:92] pod "coredns-5dd5756b68-g2x92" in "kube-system" namespace has status "Ready":"True"
	I0103 20:18:01.113682  478496 pod_ready.go:81] duration metric: took 6.48306ms waiting for pod "coredns-5dd5756b68-g2x92" in "kube-system" namespace to be "Ready" ...
	I0103 20:18:01.113692  478496 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-004925" in "kube-system" namespace to be "Ready" ...
	I0103 20:18:01.113754  478496 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-004925
	I0103 20:18:01.113760  478496 round_trippers.go:469] Request Headers:
	I0103 20:18:01.113767  478496 round_trippers.go:473]     Accept: application/json, */*
	I0103 20:18:01.113775  478496 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0103 20:18:01.116093  478496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 20:18:01.116121  478496 round_trippers.go:577] Response Headers:
	I0103 20:18:01.116131  478496 round_trippers.go:580]     Audit-Id: bc27426d-d68a-4ceb-a2d3-565c521dce69
	I0103 20:18:01.116152  478496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 20:18:01.116167  478496 round_trippers.go:580]     Content-Type: application/json
	I0103 20:18:01.116175  478496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2dd5fe00-4fef-4b9d-9976-ced013b24f01
	I0103 20:18:01.116184  478496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1353a2-568a-4e94-914a-294122a8013a
	I0103 20:18:01.116190  478496 round_trippers.go:580]     Date: Wed, 03 Jan 2024 20:18:01 GMT
	I0103 20:18:01.116322  478496 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-004925","namespace":"kube-system","uid":"5cab1935-b192-4f00-b293-deb85397ee0e","resourceVersion":"317","creationTimestamp":"2024-01-03T20:16:28Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"6c164ba77557851cf6a185bf74f58276","kubernetes.io/config.mirror":"6c164ba77557851cf6a185bf74f58276","kubernetes.io/config.seen":"2024-01-03T20:16:28.322945478Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-004925","uid":"106a4418-70f9-4422-af71-2d3c896282a0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T20:16:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 5833 chars]
	I0103 20:18:01.116798  478496 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-004925
	I0103 20:18:01.116813  478496 round_trippers.go:469] Request Headers:
	I0103 20:18:01.116821  478496 round_trippers.go:473]     Accept: application/json, */*
	I0103 20:18:01.116828  478496 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0103 20:18:01.119138  478496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 20:18:01.119200  478496 round_trippers.go:577] Response Headers:
	I0103 20:18:01.119209  478496 round_trippers.go:580]     Audit-Id: ae193048-8329-49a5-9d7a-909e61122b49
	I0103 20:18:01.119216  478496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 20:18:01.119222  478496 round_trippers.go:580]     Content-Type: application/json
	I0103 20:18:01.119228  478496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2dd5fe00-4fef-4b9d-9976-ced013b24f01
	I0103 20:18:01.119240  478496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1353a2-568a-4e94-914a-294122a8013a
	I0103 20:18:01.119247  478496 round_trippers.go:580]     Date: Wed, 03 Jan 2024 20:18:01 GMT
	I0103 20:18:01.119386  478496 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-004925","uid":"106a4418-70f9-4422-af71-2d3c896282a0","resourceVersion":"425","creationTimestamp":"2024-01-03T20:16:24Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-004925","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-004925","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T20_16_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-03T20:16:24Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0103 20:18:01.119791  478496 pod_ready.go:92] pod "etcd-multinode-004925" in "kube-system" namespace has status "Ready":"True"
	I0103 20:18:01.119811  478496 pod_ready.go:81] duration metric: took 6.112323ms waiting for pod "etcd-multinode-004925" in "kube-system" namespace to be "Ready" ...
	I0103 20:18:01.119830  478496 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-004925" in "kube-system" namespace to be "Ready" ...
	I0103 20:18:01.119902  478496 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-004925
	I0103 20:18:01.119914  478496 round_trippers.go:469] Request Headers:
	I0103 20:18:01.119922  478496 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0103 20:18:01.119931  478496 round_trippers.go:473]     Accept: application/json, */*
	I0103 20:18:01.122394  478496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 20:18:01.122416  478496 round_trippers.go:577] Response Headers:
	I0103 20:18:01.122424  478496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1353a2-568a-4e94-914a-294122a8013a
	I0103 20:18:01.122431  478496 round_trippers.go:580]     Date: Wed, 03 Jan 2024 20:18:01 GMT
	I0103 20:18:01.122437  478496 round_trippers.go:580]     Audit-Id: 798aafdf-ff67-4035-9837-4bbf29337ff6
	I0103 20:18:01.122444  478496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 20:18:01.122453  478496 round_trippers.go:580]     Content-Type: application/json
	I0103 20:18:01.122459  478496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2dd5fe00-4fef-4b9d-9976-ced013b24f01
	I0103 20:18:01.122633  478496 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-004925","namespace":"kube-system","uid":"7a543b23-069e-4da3-8d6d-c485af508606","resourceVersion":"318","creationTimestamp":"2024-01-03T20:16:26Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"a2578f205dcd3a65ab5244e64026e843","kubernetes.io/config.mirror":"a2578f205dcd3a65ab5244e64026e843","kubernetes.io/config.seen":"2024-01-03T20:16:19.858659699Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-004925","uid":"106a4418-70f9-4422-af71-2d3c896282a0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T20:16:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8219 chars]
	I0103 20:18:01.123159  478496 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-004925
	I0103 20:18:01.123176  478496 round_trippers.go:469] Request Headers:
	I0103 20:18:01.123191  478496 round_trippers.go:473]     Accept: application/json, */*
	I0103 20:18:01.123199  478496 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0103 20:18:01.133154  478496 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0103 20:18:01.133180  478496 round_trippers.go:577] Response Headers:
	I0103 20:18:01.133190  478496 round_trippers.go:580]     Audit-Id: e0802b40-e552-4dba-8981-0fd2a9b2b163
	I0103 20:18:01.133196  478496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 20:18:01.133204  478496 round_trippers.go:580]     Content-Type: application/json
	I0103 20:18:01.133210  478496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2dd5fe00-4fef-4b9d-9976-ced013b24f01
	I0103 20:18:01.133217  478496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1353a2-568a-4e94-914a-294122a8013a
	I0103 20:18:01.133223  478496 round_trippers.go:580]     Date: Wed, 03 Jan 2024 20:18:01 GMT
	I0103 20:18:01.133540  478496 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-004925","uid":"106a4418-70f9-4422-af71-2d3c896282a0","resourceVersion":"425","creationTimestamp":"2024-01-03T20:16:24Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-004925","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-004925","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T20_16_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-03T20:16:24Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0103 20:18:01.133928  478496 pod_ready.go:92] pod "kube-apiserver-multinode-004925" in "kube-system" namespace has status "Ready":"True"
	I0103 20:18:01.133947  478496 pod_ready.go:81] duration metric: took 14.106557ms waiting for pod "kube-apiserver-multinode-004925" in "kube-system" namespace to be "Ready" ...
	I0103 20:18:01.133959  478496 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-004925" in "kube-system" namespace to be "Ready" ...
	I0103 20:18:01.134026  478496 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-004925
	I0103 20:18:01.134035  478496 round_trippers.go:469] Request Headers:
	I0103 20:18:01.134042  478496 round_trippers.go:473]     Accept: application/json, */*
	I0103 20:18:01.134049  478496 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0103 20:18:01.136623  478496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 20:18:01.136686  478496 round_trippers.go:577] Response Headers:
	I0103 20:18:01.136711  478496 round_trippers.go:580]     Date: Wed, 03 Jan 2024 20:18:01 GMT
	I0103 20:18:01.136735  478496 round_trippers.go:580]     Audit-Id: 2034acdf-ebeb-4d24-b6f8-17ee30213df8
	I0103 20:18:01.136769  478496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 20:18:01.136795  478496 round_trippers.go:580]     Content-Type: application/json
	I0103 20:18:01.136829  478496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2dd5fe00-4fef-4b9d-9976-ced013b24f01
	I0103 20:18:01.136883  478496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1353a2-568a-4e94-914a-294122a8013a
	I0103 20:18:01.137044  478496 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-004925","namespace":"kube-system","uid":"9e73201b-daa5-45ae-ab17-a0117f61c545","resourceVersion":"325","creationTimestamp":"2024-01-03T20:16:27Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"353654849794f26cfddf683d77aa8ece","kubernetes.io/config.mirror":"353654849794f26cfddf683d77aa8ece","kubernetes.io/config.seen":"2024-01-03T20:16:19.858661308Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-004925","uid":"106a4418-70f9-4422-af71-2d3c896282a0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T20:16:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7794 chars]
	I0103 20:18:01.137588  478496 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-004925
	I0103 20:18:01.137605  478496 round_trippers.go:469] Request Headers:
	I0103 20:18:01.137614  478496 round_trippers.go:473]     Accept: application/json, */*
	I0103 20:18:01.137621  478496 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0103 20:18:01.140066  478496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 20:18:01.140091  478496 round_trippers.go:577] Response Headers:
	I0103 20:18:01.140101  478496 round_trippers.go:580]     Content-Type: application/json
	I0103 20:18:01.140108  478496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2dd5fe00-4fef-4b9d-9976-ced013b24f01
	I0103 20:18:01.140115  478496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1353a2-568a-4e94-914a-294122a8013a
	I0103 20:18:01.140121  478496 round_trippers.go:580]     Date: Wed, 03 Jan 2024 20:18:01 GMT
	I0103 20:18:01.140128  478496 round_trippers.go:580]     Audit-Id: a655fa2a-a68d-4cda-9cc6-f5cdadf832c0
	I0103 20:18:01.140134  478496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 20:18:01.140362  478496 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-004925","uid":"106a4418-70f9-4422-af71-2d3c896282a0","resourceVersion":"425","creationTimestamp":"2024-01-03T20:16:24Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-004925","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-004925","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T20_16_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-03T20:16:24Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0103 20:18:01.140789  478496 pod_ready.go:92] pod "kube-controller-manager-multinode-004925" in "kube-system" namespace has status "Ready":"True"
	I0103 20:18:01.140809  478496 pod_ready.go:81] duration metric: took 6.842063ms waiting for pod "kube-controller-manager-multinode-004925" in "kube-system" namespace to be "Ready" ...
	I0103 20:18:01.140821  478496 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-dz4jl" in "kube-system" namespace to be "Ready" ...
	I0103 20:18:01.297391  478496 request.go:629] Waited for 156.499242ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dz4jl
	I0103 20:18:01.297477  478496 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dz4jl
	I0103 20:18:01.297486  478496 round_trippers.go:469] Request Headers:
	I0103 20:18:01.297496  478496 round_trippers.go:473]     Accept: application/json, */*
	I0103 20:18:01.297504  478496 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0103 20:18:01.300157  478496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 20:18:01.300264  478496 round_trippers.go:577] Response Headers:
	I0103 20:18:01.300303  478496 round_trippers.go:580]     Content-Type: application/json
	I0103 20:18:01.300329  478496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2dd5fe00-4fef-4b9d-9976-ced013b24f01
	I0103 20:18:01.300352  478496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1353a2-568a-4e94-914a-294122a8013a
	I0103 20:18:01.300374  478496 round_trippers.go:580]     Date: Wed, 03 Jan 2024 20:18:01 GMT
	I0103 20:18:01.300404  478496 round_trippers.go:580]     Audit-Id: bc626fd8-1cc3-4de4-ba6a-814cfe7d848a
	I0103 20:18:01.300426  478496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 20:18:01.300566  478496 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-dz4jl","generateName":"kube-proxy-","namespace":"kube-system","uid":"aa4b165f-582a-4c17-a00b-9552514c2006","resourceVersion":"412","creationTimestamp":"2024-01-03T20:16:40Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"295a82b4-4341-4501-ba93-f3574def778a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-03T20:16:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"295a82b4-4341-4501-ba93-f3574def778a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5510 chars]
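The "Waited ... due to client-side throttling, not priority and fairness" entries here and below come from client-go's token-bucket rate limiter, exactly as the message says: requests beyond the bucket wait locally before ever reaching the apiserver. A sketch of where that limiter is configured follows; the raised values are illustrative only, not what minikube uses.

    package main

    import (
        "fmt"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder
        if err != nil {
            panic(err)
        }
        // client-go defaults are QPS=5 and Burst=10; each request beyond the
        // bucket waits client-side, which is what request.go:629 reports above.
        cfg.QPS = 50    // illustrative values, not minikube's
        cfg.Burst = 100
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        fmt.Printf("client ready: %T\n", client)
    }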
	I0103 20:18:01.497433  478496 request.go:629] Waited for 196.365388ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-004925
	I0103 20:18:01.497511  478496 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-004925
	I0103 20:18:01.497522  478496 round_trippers.go:469] Request Headers:
	I0103 20:18:01.497531  478496 round_trippers.go:473]     Accept: application/json, */*
	I0103 20:18:01.497538  478496 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0103 20:18:01.500248  478496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 20:18:01.500290  478496 round_trippers.go:577] Response Headers:
	I0103 20:18:01.500301  478496 round_trippers.go:580]     Content-Type: application/json
	I0103 20:18:01.500307  478496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2dd5fe00-4fef-4b9d-9976-ced013b24f01
	I0103 20:18:01.500314  478496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1353a2-568a-4e94-914a-294122a8013a
	I0103 20:18:01.500320  478496 round_trippers.go:580]     Date: Wed, 03 Jan 2024 20:18:01 GMT
	I0103 20:18:01.500333  478496 round_trippers.go:580]     Audit-Id: 6127391c-9610-40ab-94d1-c48f3e37d734
	I0103 20:18:01.500345  478496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 20:18:01.500493  478496 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-004925","uid":"106a4418-70f9-4422-af71-2d3c896282a0","resourceVersion":"425","creationTimestamp":"2024-01-03T20:16:24Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-004925","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-004925","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T20_16_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-03T20:16:24Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0103 20:18:01.500903  478496 pod_ready.go:92] pod "kube-proxy-dz4jl" in "kube-system" namespace has status "Ready":"True"
	I0103 20:18:01.500921  478496 pod_ready.go:81] duration metric: took 360.092348ms waiting for pod "kube-proxy-dz4jl" in "kube-system" namespace to be "Ready" ...
	I0103 20:18:01.500934  478496 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-wj6tj" in "kube-system" namespace to be "Ready" ...
	I0103 20:18:01.696690  478496 request.go:629] Waited for 195.666358ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wj6tj
	I0103 20:18:01.696761  478496 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wj6tj
	I0103 20:18:01.696772  478496 round_trippers.go:469] Request Headers:
	I0103 20:18:01.696785  478496 round_trippers.go:473]     Accept: application/json, */*
	I0103 20:18:01.696800  478496 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0103 20:18:01.699412  478496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 20:18:01.699438  478496 round_trippers.go:577] Response Headers:
	I0103 20:18:01.699447  478496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2dd5fe00-4fef-4b9d-9976-ced013b24f01
	I0103 20:18:01.699453  478496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1353a2-568a-4e94-914a-294122a8013a
	I0103 20:18:01.699460  478496 round_trippers.go:580]     Date: Wed, 03 Jan 2024 20:18:01 GMT
	I0103 20:18:01.699466  478496 round_trippers.go:580]     Audit-Id: 5c2ab140-47db-4f3c-81a1-8cee299ab437
	I0103 20:18:01.699472  478496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 20:18:01.699479  478496 round_trippers.go:580]     Content-Type: application/json
	I0103 20:18:01.699600  478496 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-wj6tj","generateName":"kube-proxy-","namespace":"kube-system","uid":"fbfd64d8-bf7f-4b35-bd06-0d470db398ed","resourceVersion":"495","creationTimestamp":"2024-01-03T20:17:28Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"295a82b4-4341-4501-ba93-f3574def778a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-03T20:17:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"295a82b4-4341-4501-ba93-f3574def778a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5518 chars]
	I0103 20:18:01.897413  478496 request.go:629] Waited for 197.32076ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-004925-m02
	I0103 20:18:01.897469  478496 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-004925-m02
	I0103 20:18:01.897474  478496 round_trippers.go:469] Request Headers:
	I0103 20:18:01.897483  478496 round_trippers.go:473]     Accept: application/json, */*
	I0103 20:18:01.897494  478496 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0103 20:18:01.900091  478496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 20:18:01.900129  478496 round_trippers.go:577] Response Headers:
	I0103 20:18:01.900138  478496 round_trippers.go:580]     Content-Type: application/json
	I0103 20:18:01.900144  478496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2dd5fe00-4fef-4b9d-9976-ced013b24f01
	I0103 20:18:01.900151  478496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1353a2-568a-4e94-914a-294122a8013a
	I0103 20:18:01.900158  478496 round_trippers.go:580]     Date: Wed, 03 Jan 2024 20:18:01 GMT
	I0103 20:18:01.900164  478496 round_trippers.go:580]     Audit-Id: 25d7498c-0720-4709-b974-d820e3676835
	I0103 20:18:01.900180  478496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 20:18:01.900296  478496 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-004925-m02","uid":"f889ebec-aa24-4ea4-865f-3bfad5255332","resourceVersion":"530","creationTimestamp":"2024-01-03T20:17:28Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-004925-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-004925","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_03T20_17_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-03T20:17:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5810 chars]
	I0103 20:18:01.900730  478496 pod_ready.go:92] pod "kube-proxy-wj6tj" in "kube-system" namespace has status "Ready":"True"
	I0103 20:18:01.900748  478496 pod_ready.go:81] duration metric: took 399.804165ms waiting for pod "kube-proxy-wj6tj" in "kube-system" namespace to be "Ready" ...
	I0103 20:18:01.900760  478496 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-004925" in "kube-system" namespace to be "Ready" ...
	I0103 20:18:02.097196  478496 request.go:629] Waited for 196.361688ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-004925
	I0103 20:18:02.097286  478496 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-004925
	I0103 20:18:02.097296  478496 round_trippers.go:469] Request Headers:
	I0103 20:18:02.097305  478496 round_trippers.go:473]     Accept: application/json, */*
	I0103 20:18:02.097315  478496 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0103 20:18:02.099875  478496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 20:18:02.099943  478496 round_trippers.go:577] Response Headers:
	I0103 20:18:02.099967  478496 round_trippers.go:580]     Audit-Id: 90496ef9-9c21-4257-8c31-47c3aa6ac270
	I0103 20:18:02.099982  478496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 20:18:02.099989  478496 round_trippers.go:580]     Content-Type: application/json
	I0103 20:18:02.099995  478496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2dd5fe00-4fef-4b9d-9976-ced013b24f01
	I0103 20:18:02.100017  478496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1353a2-568a-4e94-914a-294122a8013a
	I0103 20:18:02.100031  478496 round_trippers.go:580]     Date: Wed, 03 Jan 2024 20:18:02 GMT
	I0103 20:18:02.100159  478496 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-004925","namespace":"kube-system","uid":"7eff8446-bd7f-47a5-9d38-4c8b87c1ddf1","resourceVersion":"322","creationTimestamp":"2024-01-03T20:16:28Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"bba633dfeb251136b1652b1331b3b622","kubernetes.io/config.mirror":"bba633dfeb251136b1652b1331b3b622","kubernetes.io/config.seen":"2024-01-03T20:16:28.322944206Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-004925","uid":"106a4418-70f9-4422-af71-2d3c896282a0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-03T20:16:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4676 chars]
	I0103 20:18:02.296960  478496 request.go:629] Waited for 196.364724ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-004925
	I0103 20:18:02.297022  478496 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-004925
	I0103 20:18:02.297027  478496 round_trippers.go:469] Request Headers:
	I0103 20:18:02.297037  478496 round_trippers.go:473]     Accept: application/json, */*
	I0103 20:18:02.297044  478496 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0103 20:18:02.299632  478496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 20:18:02.299658  478496 round_trippers.go:577] Response Headers:
	I0103 20:18:02.299667  478496 round_trippers.go:580]     Content-Type: application/json
	I0103 20:18:02.299674  478496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2dd5fe00-4fef-4b9d-9976-ced013b24f01
	I0103 20:18:02.299690  478496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1353a2-568a-4e94-914a-294122a8013a
	I0103 20:18:02.299698  478496 round_trippers.go:580]     Date: Wed, 03 Jan 2024 20:18:02 GMT
	I0103 20:18:02.299704  478496 round_trippers.go:580]     Audit-Id: 5d9b63dd-45bb-42cb-b01c-a9e71262b994
	I0103 20:18:02.299711  478496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 20:18:02.299827  478496 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-004925","uid":"106a4418-70f9-4422-af71-2d3c896282a0","resourceVersion":"425","creationTimestamp":"2024-01-03T20:16:24Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-004925","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-004925","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T20_16_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-03T20:16:24Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0103 20:18:02.300224  478496 pod_ready.go:92] pod "kube-scheduler-multinode-004925" in "kube-system" namespace has status "Ready":"True"
	I0103 20:18:02.300241  478496 pod_ready.go:81] duration metric: took 399.470861ms waiting for pod "kube-scheduler-multinode-004925" in "kube-system" namespace to be "Ready" ...
	I0103 20:18:02.300255  478496 pod_ready.go:38] duration metric: took 1.200558521s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
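Each pod_ready wait above reduces to one predicate: a pod counts as "Ready" when its PodReady condition reports "True". A minimal sketch of that check (k8s.io/kubectl ships an equivalent helper; this standalone version is for illustration):

    package waitutil

    import corev1 "k8s.io/api/core/v1"

    // isPodReady reports whether the pod's PodReady condition is "True",
    // the same test applied to coredns, etcd, kube-apiserver, and the rest.
    func isPodReady(p *corev1.Pod) bool {
        for _, c := range p.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }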
	I0103 20:18:02.300273  478496 system_svc.go:44] waiting for kubelet service to be running ....
	I0103 20:18:02.300329  478496 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0103 20:18:02.314060  478496 system_svc.go:56] duration metric: took 13.779839ms WaitForService to wait for kubelet.
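The WaitForService step above amounts to running `systemctl is-active --quiet` and treating exit status 0 as "running". A sketch of that check in Go; minikube executes the command over SSH inside the node, while this version runs it locally for brevity.

    package main

    import (
        "fmt"
        "os/exec"
    )

    // kubeletActive mirrors the command in the log: `systemctl is-active --quiet`
    // exits 0 when the unit is active, so a nil error means kubelet is running.
    func kubeletActive() bool {
        return exec.Command("sudo", "systemctl", "is-active", "--quiet", "service", "kubelet").Run() == nil
    }

    func main() { fmt.Println("kubelet active:", kubeletActive()) }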
	I0103 20:18:02.314125  478496 kubeadm.go:581] duration metric: took 32.750515107s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0103 20:18:02.314151  478496 node_conditions.go:102] verifying NodePressure condition ...
	I0103 20:18:02.496712  478496 request.go:629] Waited for 182.467732ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes
	I0103 20:18:02.496769  478496 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes
	I0103 20:18:02.496775  478496 round_trippers.go:469] Request Headers:
	I0103 20:18:02.496784  478496 round_trippers.go:473]     Accept: application/json, */*
	I0103 20:18:02.496796  478496 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0103 20:18:02.499470  478496 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0103 20:18:02.499498  478496 round_trippers.go:577] Response Headers:
	I0103 20:18:02.499507  478496 round_trippers.go:580]     Content-Type: application/json
	I0103 20:18:02.499514  478496 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2dd5fe00-4fef-4b9d-9976-ced013b24f01
	I0103 20:18:02.499520  478496 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9d1353a2-568a-4e94-914a-294122a8013a
	I0103 20:18:02.499527  478496 round_trippers.go:580]     Date: Wed, 03 Jan 2024 20:18:02 GMT
	I0103 20:18:02.499533  478496 round_trippers.go:580]     Audit-Id: ae44c858-6aaa-4bda-83eb-d9990ef29430
	I0103 20:18:02.499544  478496 round_trippers.go:580]     Cache-Control: no-cache, private
	I0103 20:18:02.499712  478496 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"531"},"items":[{"metadata":{"name":"multinode-004925","uid":"106a4418-70f9-4422-af71-2d3c896282a0","resourceVersion":"425","creationTimestamp":"2024-01-03T20:16:24Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-004925","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1b6a81cbc05f28310ff11df4170e79e2b8bf477a","minikube.k8s.io/name":"multinode-004925","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_03T20_16_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 12884 chars]
	I0103 20:18:02.500408  478496 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0103 20:18:02.500427  478496 node_conditions.go:123] node cpu capacity is 2
	I0103 20:18:02.500437  478496 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0103 20:18:02.500442  478496 node_conditions.go:123] node cpu capacity is 2
	I0103 20:18:02.500447  478496 node_conditions.go:105] duration metric: took 186.289844ms to run NodePressure ...
	I0103 20:18:02.500457  478496 start.go:228] waiting for startup goroutines ...
	I0103 20:18:02.500484  478496 start.go:242] writing updated cluster config ...
	I0103 20:18:02.500825  478496 ssh_runner.go:195] Run: rm -f paused
	I0103 20:18:02.566639  478496 start.go:600] kubectl: 1.29.0, cluster: 1.28.4 (minor skew: 1)
	I0103 20:18:02.569561  478496 out.go:177] * Done! kubectl is now configured to use "multinode-004925" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Jan 03 20:17:12 multinode-004925 crio[904]: time="2024-01-03 20:17:12.757425999Z" level=info msg="Starting container: 075e8c4365e75279ff364770cc770572a5d1971b00189c156db1260afc554a3c" id=ff71818c-e8f9-496e-9ff2-d53adef98e2a name=/runtime.v1.RuntimeService/StartContainer
	Jan 03 20:17:12 multinode-004925 crio[904]: time="2024-01-03 20:17:12.774653581Z" level=info msg="Started container" PID=1932 containerID=075e8c4365e75279ff364770cc770572a5d1971b00189c156db1260afc554a3c description=kube-system/storage-provisioner/storage-provisioner id=ff71818c-e8f9-496e-9ff2-d53adef98e2a name=/runtime.v1.RuntimeService/StartContainer sandboxID=836d95e1233c1602c8229ae0248d05a4957ab60775bbdba7fa4b39cceae6a785
	Jan 03 20:17:12 multinode-004925 crio[904]: time="2024-01-03 20:17:12.777600598Z" level=info msg="Created container e632b78202635bde5f3bcdb0f0465b025e727840305b477e87a5921fda854a95: kube-system/coredns-5dd5756b68-g2x92/coredns" id=e13ccdea-9b9f-4b08-b5de-f20351500ea8 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 03 20:17:12 multinode-004925 crio[904]: time="2024-01-03 20:17:12.780188767Z" level=info msg="Starting container: e632b78202635bde5f3bcdb0f0465b025e727840305b477e87a5921fda854a95" id=60242e60-6fb7-45e1-bcc8-f92b16d80a96 name=/runtime.v1.RuntimeService/StartContainer
	Jan 03 20:17:12 multinode-004925 crio[904]: time="2024-01-03 20:17:12.801657099Z" level=info msg="Started container" PID=1950 containerID=e632b78202635bde5f3bcdb0f0465b025e727840305b477e87a5921fda854a95 description=kube-system/coredns-5dd5756b68-g2x92/coredns id=60242e60-6fb7-45e1-bcc8-f92b16d80a96 name=/runtime.v1.RuntimeService/StartContainer sandboxID=af9af1ced41eb774caa62de95c5850852c45e65a60b61bc8d1c0dbfd5bb004c6
	Jan 03 20:18:03 multinode-004925 crio[904]: time="2024-01-03 20:18:03.821305428Z" level=info msg="Running pod sandbox: default/busybox-5bc68d56bd-fs9dz/POD" id=193e8d32-53eb-4ce0-b676-fb7df80fc19c name=/runtime.v1.RuntimeService/RunPodSandbox
	Jan 03 20:18:03 multinode-004925 crio[904]: time="2024-01-03 20:18:03.821372603Z" level=warning msg="Allowed annotations are specified for workload []"
	Jan 03 20:18:03 multinode-004925 crio[904]: time="2024-01-03 20:18:03.847074371Z" level=info msg="Got pod network &{Name:busybox-5bc68d56bd-fs9dz Namespace:default ID:7b8a170f8c17e3555ea2eb595d56188a3deeade175fb8fec1a516feb5699478e UID:097b3f59-af79-4fe2-9ef8-a6198202a4d7 NetNS:/var/run/netns/36cb7518-9af0-4fbf-ab75-e53190df263c Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Jan 03 20:18:03 multinode-004925 crio[904]: time="2024-01-03 20:18:03.847116775Z" level=info msg="Adding pod default_busybox-5bc68d56bd-fs9dz to CNI network \"kindnet\" (type=ptp)"
	Jan 03 20:18:03 multinode-004925 crio[904]: time="2024-01-03 20:18:03.858484649Z" level=info msg="Got pod network &{Name:busybox-5bc68d56bd-fs9dz Namespace:default ID:7b8a170f8c17e3555ea2eb595d56188a3deeade175fb8fec1a516feb5699478e UID:097b3f59-af79-4fe2-9ef8-a6198202a4d7 NetNS:/var/run/netns/36cb7518-9af0-4fbf-ab75-e53190df263c Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Jan 03 20:18:03 multinode-004925 crio[904]: time="2024-01-03 20:18:03.858715112Z" level=info msg="Checking pod default_busybox-5bc68d56bd-fs9dz for CNI network kindnet (type=ptp)"
	Jan 03 20:18:03 multinode-004925 crio[904]: time="2024-01-03 20:18:03.861229658Z" level=info msg="Ran pod sandbox 7b8a170f8c17e3555ea2eb595d56188a3deeade175fb8fec1a516feb5699478e with infra container: default/busybox-5bc68d56bd-fs9dz/POD" id=193e8d32-53eb-4ce0-b676-fb7df80fc19c name=/runtime.v1.RuntimeService/RunPodSandbox
	Jan 03 20:18:03 multinode-004925 crio[904]: time="2024-01-03 20:18:03.865144076Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28" id=66428081-5d33-4690-bc6b-3d30655f2b00 name=/runtime.v1.ImageService/ImageStatus
	Jan 03 20:18:03 multinode-004925 crio[904]: time="2024-01-03 20:18:03.865363824Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28 not found" id=66428081-5d33-4690-bc6b-3d30655f2b00 name=/runtime.v1.ImageService/ImageStatus
	Jan 03 20:18:03 multinode-004925 crio[904]: time="2024-01-03 20:18:03.866174738Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28" id=d8a8b662-b776-4937-8e5a-45283d16d3f5 name=/runtime.v1.ImageService/PullImage
	Jan 03 20:18:03 multinode-004925 crio[904]: time="2024-01-03 20:18:03.868697874Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28\""
	Jan 03 20:18:04 multinode-004925 crio[904]: time="2024-01-03 20:18:04.536001585Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28\""
	Jan 03 20:18:05 multinode-004925 crio[904]: time="2024-01-03 20:18:05.831588676Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:859d41e4316c182cb559f9ae3c5ffcac8602ee1179794a1707c06cd092a008d3" id=d8a8b662-b776-4937-8e5a-45283d16d3f5 name=/runtime.v1.ImageService/PullImage
	Jan 03 20:18:05 multinode-004925 crio[904]: time="2024-01-03 20:18:05.832818122Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28" id=750fa6d5-7e5b-4bd1-8141-38bfb992afcf name=/runtime.v1.ImageService/ImageStatus
	Jan 03 20:18:05 multinode-004925 crio[904]: time="2024-01-03 20:18:05.833860918Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:89a35e2ebb6b938201966889b5e8c85b931db6432c5643966116cd1c28bf45cd,RepoTags:[gcr.io/k8s-minikube/busybox:1.28],RepoDigests:[gcr.io/k8s-minikube/busybox@sha256:859d41e4316c182cb559f9ae3c5ffcac8602ee1179794a1707c06cd092a008d3 gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12],Size_:1496796,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=750fa6d5-7e5b-4bd1-8141-38bfb992afcf name=/runtime.v1.ImageService/ImageStatus
	Jan 03 20:18:05 multinode-004925 crio[904]: time="2024-01-03 20:18:05.835067907Z" level=info msg="Creating container: default/busybox-5bc68d56bd-fs9dz/busybox" id=956f8bda-c224-4e61-ade0-bdb24d17e60f name=/runtime.v1.RuntimeService/CreateContainer
	Jan 03 20:18:05 multinode-004925 crio[904]: time="2024-01-03 20:18:05.835318965Z" level=warning msg="Allowed annotations are specified for workload []"
	Jan 03 20:18:05 multinode-004925 crio[904]: time="2024-01-03 20:18:05.896851117Z" level=info msg="Created container f5b0844894d1cfcc6fbc9ce6e6692473dafde7ef675744327e00750e8db96232: default/busybox-5bc68d56bd-fs9dz/busybox" id=956f8bda-c224-4e61-ade0-bdb24d17e60f name=/runtime.v1.RuntimeService/CreateContainer
	Jan 03 20:18:05 multinode-004925 crio[904]: time="2024-01-03 20:18:05.897626888Z" level=info msg="Starting container: f5b0844894d1cfcc6fbc9ce6e6692473dafde7ef675744327e00750e8db96232" id=71a41ad9-d10f-4a27-bcb5-26c3d6fa3db3 name=/runtime.v1.RuntimeService/StartContainer
	Jan 03 20:18:05 multinode-004925 crio[904]: time="2024-01-03 20:18:05.906246659Z" level=info msg="Started container" PID=2088 containerID=f5b0844894d1cfcc6fbc9ce6e6692473dafde7ef675744327e00750e8db96232 description=default/busybox-5bc68d56bd-fs9dz/busybox id=71a41ad9-d10f-4a27-bcb5-26c3d6fa3db3 name=/runtime.v1.RuntimeService/StartContainer sandboxID=7b8a170f8c17e3555ea2eb595d56188a3deeade175fb8fec1a516feb5699478e
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	f5b0844894d1c       gcr.io/k8s-minikube/busybox@sha256:859d41e4316c182cb559f9ae3c5ffcac8602ee1179794a1707c06cd092a008d3   5 seconds ago        Running             busybox                   0                   7b8a170f8c17e       busybox-5bc68d56bd-fs9dz
	e632b78202635       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                      58 seconds ago       Running             coredns                   0                   af9af1ced41eb       coredns-5dd5756b68-g2x92
	075e8c4365e75       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      58 seconds ago       Running             storage-provisioner       0                   836d95e1233c1       storage-provisioner
	1cddeddba3ea4       3ca3ca488cf13fde14cfc4b3ffde0c53a8c161b030f4a444a797fba6aef38c39                                      About a minute ago   Running             kube-proxy                0                   b5e0df49d1b41       kube-proxy-dz4jl
	20f1d8c139955       04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26                                      About a minute ago   Running             kindnet-cni               0                   b728af6f3c183       kindnet-stdx9
	0697f086eb96a       04b4c447bb9d4840af3bf7e836397379d65df87c86e55dcd27f31a8d11df2419                                      About a minute ago   Running             kube-apiserver            0                   61beae96e209a       kube-apiserver-multinode-004925
	fcdb3b725b388       9961cbceaf234d59b7dcf8a197a024f3e3ce4b7fe2b67c2378efd3d209ca994b                                      About a minute ago   Running             kube-controller-manager   0                   be6dcfa8de09d       kube-controller-manager-multinode-004925
	9f241a730b69c       05c284c929889d88306fdb3dd14ee2d0132543740f9e247685243214fc3d2c54                                      About a minute ago   Running             kube-scheduler            0                   f50f8816301c6       kube-scheduler-multinode-004925
	8527be5cf916d       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                      About a minute ago   Running             etcd                      0                   3f6574b8c90db       etcd-multinode-004925
	
	
	==> coredns [e632b78202635bde5f3bcdb0f0465b025e727840305b477e87a5921fda854a95] <==
	[INFO] 10.244.0.3:56180 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000098674s
	[INFO] 10.244.1.2:60722 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000127753s
	[INFO] 10.244.1.2:41299 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00105476s
	[INFO] 10.244.1.2:50653 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000082436s
	[INFO] 10.244.1.2:34395 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000052619s
	[INFO] 10.244.1.2:33713 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.000880271s
	[INFO] 10.244.1.2:45852 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000072024s
	[INFO] 10.244.1.2:58041 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000691769s
	[INFO] 10.244.1.2:53326 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000163453s
	[INFO] 10.244.0.3:47375 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000122092s
	[INFO] 10.244.0.3:38196 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000081886s
	[INFO] 10.244.0.3:55415 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000062154s
	[INFO] 10.244.0.3:47697 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000058945s
	[INFO] 10.244.1.2:57923 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000156947s
	[INFO] 10.244.1.2:51470 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000092487s
	[INFO] 10.244.1.2:45709 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000092397s
	[INFO] 10.244.1.2:50636 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00007081s
	[INFO] 10.244.0.3:47275 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000109546s
	[INFO] 10.244.0.3:37323 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000137771s
	[INFO] 10.244.0.3:53344 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000104483s
	[INFO] 10.244.0.3:45184 - 5 "PTR IN 1.58.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000088919s
	[INFO] 10.244.1.2:42085 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000110243s
	[INFO] 10.244.1.2:60094 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000073887s
	[INFO] 10.244.1.2:37869 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000082256s
	[INFO] 10.244.1.2:45789 - 5 "PTR IN 1.58.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00006939s
	
	
	==> describe nodes <==
	Name:               multinode-004925
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=multinode-004925
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1b6a81cbc05f28310ff11df4170e79e2b8bf477a
	                    minikube.k8s.io/name=multinode-004925
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_03T20_16_29_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 03 Jan 2024 20:16:24 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-004925
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 03 Jan 2024 20:18:09 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 03 Jan 2024 20:17:12 +0000   Wed, 03 Jan 2024 20:16:21 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 03 Jan 2024 20:17:12 +0000   Wed, 03 Jan 2024 20:16:21 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 03 Jan 2024 20:17:12 +0000   Wed, 03 Jan 2024 20:16:21 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 03 Jan 2024 20:17:12 +0000   Wed, 03 Jan 2024 20:17:12 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.2
	  Hostname:    multinode-004925
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022504Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022504Ki
	  pods:               110
	System Info:
	  Machine ID:                 c32e2801c07d46318eccea5aea3a0bc2
	  System UUID:                9d7a6dec-9198-483b-b629-15709c2b7f54
	  Boot ID:                    75f8dc93-969c-4083-a399-3fa01ac68612
	  Kernel Version:             5.15.0-1051-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-fs9dz                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 coredns-5dd5756b68-g2x92                    100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     91s
	  kube-system                 etcd-multinode-004925                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         103s
	  kube-system                 kindnet-stdx9                               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      91s
	  kube-system                 kube-apiserver-multinode-004925             250m (12%)    0 (0%)      0 (0%)           0 (0%)         105s
	  kube-system                 kube-controller-manager-multinode-004925    200m (10%)    0 (0%)      0 (0%)           0 (0%)         104s
	  kube-system                 kube-proxy-dz4jl                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 kube-scheduler-multinode-004925             100m (5%)     0 (0%)      0 (0%)           0 (0%)         103s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         89s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 89s                  kube-proxy       
	  Normal  Starting                 112s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  112s (x8 over 112s)  kubelet          Node multinode-004925 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    112s (x8 over 112s)  kubelet          Node multinode-004925 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     112s (x8 over 112s)  kubelet          Node multinode-004925 status is now: NodeHasSufficientPID
	  Normal  Starting                 103s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  103s                 kubelet          Node multinode-004925 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    103s                 kubelet          Node multinode-004925 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     103s                 kubelet          Node multinode-004925 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           91s                  node-controller  Node multinode-004925 event: Registered Node multinode-004925 in Controller
	  Normal  NodeReady                59s                  kubelet          Node multinode-004925 status is now: NodeReady
	
	
	Name:               multinode-004925-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=multinode-004925-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1b6a81cbc05f28310ff11df4170e79e2b8bf477a
	                    minikube.k8s.io/name=multinode-004925
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_01_03T20_17_29_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 03 Jan 2024 20:17:28 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-004925-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 03 Jan 2024 20:18:09 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 03 Jan 2024 20:18:00 +0000   Wed, 03 Jan 2024 20:17:28 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 03 Jan 2024 20:18:00 +0000   Wed, 03 Jan 2024 20:17:28 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 03 Jan 2024 20:18:00 +0000   Wed, 03 Jan 2024 20:17:28 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 03 Jan 2024 20:18:00 +0000   Wed, 03 Jan 2024 20:18:00 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.3
	  Hostname:    multinode-004925-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022504Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022504Ki
	  pods:               110
	System Info:
	  Machine ID:                 847a0555922245568b4d879219743683
	  System UUID:                0dd7fc0f-7ff3-4fe5-81cd-ab0b007a0348
	  Boot ID:                    75f8dc93-969c-4083-a399-3fa01ac68612
	  Kernel Version:             5.15.0-1051-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-m75vn    0 (0%)        0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 kindnet-v2wwd               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      43s
	  kube-system                 kube-proxy-wj6tj            0 (0%)        0 (0%)      0 (0%)           0 (0%)         43s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	  hugepages-32Mi     0 (0%)     0 (0%)
	  hugepages-64Ki     0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 41s                kube-proxy       
	  Normal  NodeHasSufficientMemory  43s (x5 over 44s)  kubelet          Node multinode-004925-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    43s (x5 over 44s)  kubelet          Node multinode-004925-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     43s (x5 over 44s)  kubelet          Node multinode-004925-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           41s                node-controller  Node multinode-004925-m02 event: Registered Node multinode-004925-m02 in Controller
	  Normal  NodeReady                11s                kubelet          Node multinode-004925-m02 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.001189] FS-Cache: O-key=[8] 'ccd1c90000000000'
	[  +0.000818] FS-Cache: N-cookie c=0000001e [p=00000015 fl=2 nc=0 na=1]
	[  +0.001059] FS-Cache: N-cookie d=000000008497ac2d{9p.inode} n=00000000a750ea4f
	[  +0.001301] FS-Cache: N-key=[8] 'ccd1c90000000000'
	[  +0.014646] FS-Cache: Duplicate cookie detected
	[  +0.000925] FS-Cache: O-cookie c=00000018 [p=00000015 fl=226 nc=0 na=1]
	[  +0.001115] FS-Cache: O-cookie d=000000008497ac2d{9p.inode} n=00000000f7d3da5e
	[  +0.001218] FS-Cache: O-key=[8] 'ccd1c90000000000'
	[  +0.000824] FS-Cache: N-cookie c=0000001f [p=00000015 fl=2 nc=0 na=1]
	[  +0.001156] FS-Cache: N-cookie d=000000008497ac2d{9p.inode} n=00000000bc524ce4
	[  +0.001241] FS-Cache: N-key=[8] 'ccd1c90000000000'
	[  +2.760106] FS-Cache: Duplicate cookie detected
	[  +0.000784] FS-Cache: O-cookie c=00000016 [p=00000015 fl=226 nc=0 na=1]
	[  +0.001116] FS-Cache: O-cookie d=000000008497ac2d{9p.inode} n=00000000ca9fc0f7
	[  +0.001225] FS-Cache: O-key=[8] 'cbd1c90000000000'
	[  +0.000783] FS-Cache: N-cookie c=00000021 [p=00000015 fl=2 nc=0 na=1]
	[  +0.001000] FS-Cache: N-cookie d=000000008497ac2d{9p.inode} n=000000003725d1cd
	[  +0.001192] FS-Cache: N-key=[8] 'cbd1c90000000000'
	[  +0.402621] FS-Cache: Duplicate cookie detected
	[  +0.000828] FS-Cache: O-cookie c=0000001b [p=00000015 fl=226 nc=0 na=1]
	[  +0.001155] FS-Cache: O-cookie d=000000008497ac2d{9p.inode} n=00000000458cff56
	[  +0.001202] FS-Cache: O-key=[8] 'd1d1c90000000000'
	[  +0.000836] FS-Cache: N-cookie c=00000022 [p=00000015 fl=2 nc=0 na=1]
	[  +0.001046] FS-Cache: N-cookie d=000000008497ac2d{9p.inode} n=00000000263e5b2a
	[  +0.001184] FS-Cache: N-key=[8] 'd1d1c90000000000'
	
	
	==> etcd [8527be5cf916dd09ca957b2a44873cd5909b9c3b45f44d2a2b2de8350046010d] <==
	{"level":"info","ts":"2024-01-03T20:16:20.763191Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 switched to configuration voters=(12882097698489969905)"}
	{"level":"info","ts":"2024-01-03T20:16:20.766654Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","added-peer-id":"b2c6679ac05f2cf1","added-peer-peer-urls":["https://192.168.58.2:2380"]}
	{"level":"info","ts":"2024-01-03T20:16:20.767205Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"b2c6679ac05f2cf1","initial-advertise-peer-urls":["https://192.168.58.2:2380"],"listen-peer-urls":["https://192.168.58.2:2380"],"advertise-client-urls":["https://192.168.58.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.58.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-01-03T20:16:20.767469Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-01-03T20:16:20.797809Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 is starting a new election at term 1"}
	{"level":"info","ts":"2024-01-03T20:16:20.79791Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-01-03T20:16:20.797954Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgPreVoteResp from b2c6679ac05f2cf1 at term 1"}
	{"level":"info","ts":"2024-01-03T20:16:20.797993Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became candidate at term 2"}
	{"level":"info","ts":"2024-01-03T20:16:20.798031Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgVoteResp from b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2024-01-03T20:16:20.79808Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became leader at term 2"}
	{"level":"info","ts":"2024-01-03T20:16:20.798115Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b2c6679ac05f2cf1 elected leader b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2024-01-03T20:16:20.801129Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-03T20:16:20.806715Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"b2c6679ac05f2cf1","local-member-attributes":"{Name:multinode-004925 ClientURLs:[https://192.168.58.2:2379]}","request-path":"/0/members/b2c6679ac05f2cf1/attributes","cluster-id":"3a56e4ca95e2355c","publish-timeout":"7s"}
	{"level":"info","ts":"2024-01-03T20:16:20.806791Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-03T20:16:20.807221Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-03T20:16:20.807346Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-03T20:16:20.807394Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-03T20:16:20.807442Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-03T20:16:20.808324Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.58.2:2379"}
	{"level":"info","ts":"2024-01-03T20:16:20.809167Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-01-03T20:16:20.834581Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-01-03T20:16:20.834689Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-01-03T20:16:41.466947Z","caller":"traceutil/trace.go:171","msg":"trace[765044321] transaction","detail":"{read_only:false; response_revision:387; number_of_response:1; }","duration":"124.376102ms","start":"2024-01-03T20:16:41.342493Z","end":"2024-01-03T20:16:41.466869Z","steps":["trace[765044321] 'process raft request'  (duration: 124.208859ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-03T20:16:41.470064Z","caller":"traceutil/trace.go:171","msg":"trace[487968629] transaction","detail":"{read_only:false; response_revision:385; number_of_response:1; }","duration":"130.465621ms","start":"2024-01-03T20:16:41.33958Z","end":"2024-01-03T20:16:41.470046Z","steps":["trace[487968629] 'process raft request'  (duration: 34.104246ms)","trace[487968629] 'compare'  (duration: 88.1946ms)"],"step_count":2}
	{"level":"info","ts":"2024-01-03T20:16:41.470473Z","caller":"traceutil/trace.go:171","msg":"trace[799516881] transaction","detail":"{read_only:false; response_revision:386; number_of_response:1; }","duration":"128.690318ms","start":"2024-01-03T20:16:41.341648Z","end":"2024-01-03T20:16:41.470338Z","steps":["trace[799516881] 'process raft request'  (duration: 124.942866ms)"],"step_count":1}
	
	
	==> kernel <==
	 20:18:11 up  2:00,  0 users,  load average: 1.43, 2.03, 2.10
	Linux multinode-004925 5.15.0-1051-aws #56~20.04.1-Ubuntu SMP Tue Nov 28 15:43:06 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	
	==> kindnet [20f1d8c13995598c865d6ef2d7feb601f352c84bb9781f1ee1cadd2aec049a67] <==
	I0103 20:16:41.711034       1 main.go:116] setting mtu 1500 for CNI 
	I0103 20:16:41.716484       1 main.go:146] kindnetd IP family: "ipv4"
	I0103 20:16:41.716582       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0103 20:17:12.063467       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: i/o timeout
	I0103 20:17:12.077906       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0103 20:17:12.077939       1 main.go:227] handling current node
	I0103 20:17:22.100990       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0103 20:17:22.101107       1 main.go:227] handling current node
	I0103 20:17:32.107526       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0103 20:17:32.107556       1 main.go:227] handling current node
	I0103 20:17:32.107567       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I0103 20:17:32.107573       1 main.go:250] Node multinode-004925-m02 has CIDR [10.244.1.0/24] 
	I0103 20:17:32.107722       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 192.168.58.3 Flags: [] Table: 0} 
	I0103 20:17:42.117459       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0103 20:17:42.117493       1 main.go:227] handling current node
	I0103 20:17:42.117506       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I0103 20:17:42.117515       1 main.go:250] Node multinode-004925-m02 has CIDR [10.244.1.0/24] 
	I0103 20:17:52.130802       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0103 20:17:52.130830       1 main.go:227] handling current node
	I0103 20:17:52.130841       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I0103 20:17:52.130846       1 main.go:250] Node multinode-004925-m02 has CIDR [10.244.1.0/24] 
	I0103 20:18:02.143738       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0103 20:18:02.143766       1 main.go:227] handling current node
	I0103 20:18:02.143776       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I0103 20:18:02.143782       1 main.go:250] Node multinode-004925-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [0697f086eb96a9ef7daccbd103c6c4cf9d02b28f9c97f681ca20efc9ed793bf8] <==
	I0103 20:16:24.938570       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0103 20:16:24.952665       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0103 20:16:24.952698       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0103 20:16:24.962637       1 cache.go:39] Caches are synced for autoregister controller
	I0103 20:16:24.964501       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0103 20:16:24.964716       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0103 20:16:24.965250       1 shared_informer.go:318] Caches are synced for configmaps
	I0103 20:16:24.975695       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0103 20:16:25.757902       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0103 20:16:25.762625       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0103 20:16:25.762647       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0103 20:16:26.334218       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0103 20:16:26.377099       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0103 20:16:26.490238       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0103 20:16:26.498605       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.58.2]
	I0103 20:16:26.500095       1 controller.go:624] quota admission added evaluator for: endpoints
	I0103 20:16:26.509005       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0103 20:16:26.935727       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0103 20:16:28.240987       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0103 20:16:28.252335       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0103 20:16:28.263514       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0103 20:16:40.746725       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I0103 20:16:40.831242       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	E0103 20:18:08.598199       1 upgradeaware.go:425] Error proxying data from client to backend: write tcp 192.168.58.2:33096->192.168.58.3:10250: write: broken pipe
	E0103 20:18:09.771773       1 upgradeaware.go:425] Error proxying data from client to backend: read tcp 192.168.58.2:8443->192.168.58.1:45784: read: connection reset by peer
	
	
	==> kube-controller-manager [fcdb3b725b388a1511f0193f13f6c86f71c48c5ba81830c8ca350bbf86073e36] <==
	I0103 20:16:41.734342       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="42.519µs"
	I0103 20:16:41.734605       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="31.508µs"
	I0103 20:17:12.348484       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="81.797µs"
	I0103 20:17:12.365551       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="55.934µs"
	I0103 20:17:13.638405       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="10.504428ms"
	I0103 20:17:13.638483       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="40.967µs"
	I0103 20:17:15.809745       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I0103 20:17:28.787012       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-004925-m02\" does not exist"
	I0103 20:17:28.802974       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-004925-m02" podCIDRs=["10.244.1.0/24"]
	I0103 20:17:28.822705       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-v2wwd"
	I0103 20:17:28.842074       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-wj6tj"
	I0103 20:17:30.812682       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-004925-m02"
	I0103 20:17:30.812753       1 event.go:307] "Event occurred" object="multinode-004925-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-004925-m02 event: Registered Node multinode-004925-m02 in Controller"
	I0103 20:18:00.616496       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-004925-m02"
	I0103 20:18:03.475412       1 event.go:307] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-5bc68d56bd to 2"
	I0103 20:18:03.490003       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-m75vn"
	I0103 20:18:03.502142       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-fs9dz"
	I0103 20:18:03.526779       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="52.406908ms"
	I0103 20:18:03.556436       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="29.591563ms"
	I0103 20:18:03.556521       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="46.924µs"
	I0103 20:18:03.562267       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="43.585µs"
	I0103 20:18:06.463901       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="6.258135ms"
	I0103 20:18:06.464604       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="33.566µs"
	I0103 20:18:06.726820       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="14.927145ms"
	I0103 20:18:06.727923       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="37.013µs"
	
	
	==> kube-proxy [1cddeddba3ea48151f21cdb0a181fc6e926c8fc38f14fc5b682929acd8bdda45] <==
	I0103 20:16:42.177667       1 server_others.go:69] "Using iptables proxy"
	I0103 20:16:42.207996       1 node.go:141] Successfully retrieved node IP: 192.168.58.2
	I0103 20:16:42.331961       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0103 20:16:42.338749       1 server_others.go:152] "Using iptables Proxier"
	I0103 20:16:42.338863       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0103 20:16:42.338898       1 server_others.go:438] "Defaulting to no-op detect-local"
	I0103 20:16:42.338988       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0103 20:16:42.357930       1 server.go:846] "Version info" version="v1.28.4"
	I0103 20:16:42.358030       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0103 20:16:42.373710       1 config.go:188] "Starting service config controller"
	I0103 20:16:42.373754       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0103 20:16:42.373787       1 config.go:97] "Starting endpoint slice config controller"
	I0103 20:16:42.373792       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0103 20:16:42.374561       1 config.go:315] "Starting node config controller"
	I0103 20:16:42.374584       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0103 20:16:42.474701       1 shared_informer.go:318] Caches are synced for node config
	I0103 20:16:42.474747       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0103 20:16:42.474714       1 shared_informer.go:318] Caches are synced for service config
	
	
	==> kube-scheduler [9f241a730b69c743f773f25fa7044384c36e2f311543451ceb3ecb3522b5d219] <==
	W0103 20:16:24.950595       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0103 20:16:24.950608       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0103 20:16:24.950676       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0103 20:16:24.950686       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0103 20:16:24.950749       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0103 20:16:24.950759       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0103 20:16:24.950832       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0103 20:16:24.950844       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0103 20:16:24.950910       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0103 20:16:24.950921       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0103 20:16:25.805031       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0103 20:16:25.805151       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0103 20:16:25.860493       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0103 20:16:25.860622       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0103 20:16:25.883477       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0103 20:16:25.883512       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0103 20:16:25.954130       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0103 20:16:25.954254       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0103 20:16:25.983765       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0103 20:16:25.983913       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0103 20:16:26.003257       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0103 20:16:26.003363       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0103 20:16:26.049513       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0103 20:16:26.049557       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0103 20:16:28.402877       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jan 03 20:16:40 multinode-004925 kubelet[1395]: I0103 20:16:40.913221    1395 topology_manager.go:215] "Topology Admit Handler" podUID="9371f41f-cf0e-4412-a9cc-aef70db86495" podNamespace="kube-system" podName="kindnet-stdx9"
	Jan 03 20:16:40 multinode-004925 kubelet[1395]: I0103 20:16:40.930243    1395 topology_manager.go:215] "Topology Admit Handler" podUID="aa4b165f-582a-4c17-a00b-9552514c2006" podNamespace="kube-system" podName="kube-proxy-dz4jl"
	Jan 03 20:16:40 multinode-004925 kubelet[1395]: I0103 20:16:40.946936    1395 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9371f41f-cf0e-4412-a9cc-aef70db86495-lib-modules\") pod \"kindnet-stdx9\" (UID: \"9371f41f-cf0e-4412-a9cc-aef70db86495\") " pod="kube-system/kindnet-stdx9"
	Jan 03 20:16:40 multinode-004925 kubelet[1395]: I0103 20:16:40.946988    1395 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/aa4b165f-582a-4c17-a00b-9552514c2006-lib-modules\") pod \"kube-proxy-dz4jl\" (UID: \"aa4b165f-582a-4c17-a00b-9552514c2006\") " pod="kube-system/kube-proxy-dz4jl"
	Jan 03 20:16:40 multinode-004925 kubelet[1395]: I0103 20:16:40.947015    1395 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n5tp5\" (UniqueName: \"kubernetes.io/projected/aa4b165f-582a-4c17-a00b-9552514c2006-kube-api-access-n5tp5\") pod \"kube-proxy-dz4jl\" (UID: \"aa4b165f-582a-4c17-a00b-9552514c2006\") " pod="kube-system/kube-proxy-dz4jl"
	Jan 03 20:16:40 multinode-004925 kubelet[1395]: I0103 20:16:40.947039    1395 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9371f41f-cf0e-4412-a9cc-aef70db86495-xtables-lock\") pod \"kindnet-stdx9\" (UID: \"9371f41f-cf0e-4412-a9cc-aef70db86495\") " pod="kube-system/kindnet-stdx9"
	Jan 03 20:16:40 multinode-004925 kubelet[1395]: I0103 20:16:40.947066    1395 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/aa4b165f-582a-4c17-a00b-9552514c2006-xtables-lock\") pod \"kube-proxy-dz4jl\" (UID: \"aa4b165f-582a-4c17-a00b-9552514c2006\") " pod="kube-system/kube-proxy-dz4jl"
	Jan 03 20:16:40 multinode-004925 kubelet[1395]: I0103 20:16:40.947090    1395 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/9371f41f-cf0e-4412-a9cc-aef70db86495-cni-cfg\") pod \"kindnet-stdx9\" (UID: \"9371f41f-cf0e-4412-a9cc-aef70db86495\") " pod="kube-system/kindnet-stdx9"
	Jan 03 20:16:40 multinode-004925 kubelet[1395]: I0103 20:16:40.947112    1395 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/aa4b165f-582a-4c17-a00b-9552514c2006-kube-proxy\") pod \"kube-proxy-dz4jl\" (UID: \"aa4b165f-582a-4c17-a00b-9552514c2006\") " pod="kube-system/kube-proxy-dz4jl"
	Jan 03 20:16:40 multinode-004925 kubelet[1395]: I0103 20:16:40.947140    1395 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7pz8x\" (UniqueName: \"kubernetes.io/projected/9371f41f-cf0e-4412-a9cc-aef70db86495-kube-api-access-7pz8x\") pod \"kindnet-stdx9\" (UID: \"9371f41f-cf0e-4412-a9cc-aef70db86495\") " pod="kube-system/kindnet-stdx9"
	Jan 03 20:16:41 multinode-004925 kubelet[1395]: W0103 20:16:41.269643    1395 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/a8b5d16b1951a483b6aef678c3a9e4af0d01efc76e4ab6e72a5f853aa60f6da3/crio-b728af6f3c183b100aca5f97d634aaa79627184d1a9e8bde428fc7d25e9c1694 WatchSource:0}: Error finding container b728af6f3c183b100aca5f97d634aaa79627184d1a9e8bde428fc7d25e9c1694: Status 404 returned error can't find the container with id b728af6f3c183b100aca5f97d634aaa79627184d1a9e8bde428fc7d25e9c1694
	Jan 03 20:16:42 multinode-004925 kubelet[1395]: I0103 20:16:42.564666    1395 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-stdx9" podStartSLOduration=2.5646178219999998 podCreationTimestamp="2024-01-03 20:16:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-01-03 20:16:41.633938487 +0000 UTC m=+13.427148380" watchObservedRunningTime="2024-01-03 20:16:42.564617822 +0000 UTC m=+14.357827707"
	Jan 03 20:17:12 multinode-004925 kubelet[1395]: I0103 20:17:12.308822    1395 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Jan 03 20:17:12 multinode-004925 kubelet[1395]: I0103 20:17:12.334868    1395 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-dz4jl" podStartSLOduration=32.334828345 podCreationTimestamp="2024-01-03 20:16:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-01-03 20:16:42.568315182 +0000 UTC m=+14.361525075" watchObservedRunningTime="2024-01-03 20:17:12.334828345 +0000 UTC m=+44.128038230"
	Jan 03 20:17:12 multinode-004925 kubelet[1395]: I0103 20:17:12.335111    1395 topology_manager.go:215] "Topology Admit Handler" podUID="47ba7e03-3ba6-4a93-80c0-6ff32c31f9bb" podNamespace="kube-system" podName="storage-provisioner"
	Jan 03 20:17:12 multinode-004925 kubelet[1395]: I0103 20:17:12.341191    1395 topology_manager.go:215] "Topology Admit Handler" podUID="f982667b-3ee3-4aaa-9b63-2bee4f32be8f" podNamespace="kube-system" podName="coredns-5dd5756b68-g2x92"
	Jan 03 20:17:12 multinode-004925 kubelet[1395]: I0103 20:17:12.395622    1395 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f982667b-3ee3-4aaa-9b63-2bee4f32be8f-config-volume\") pod \"coredns-5dd5756b68-g2x92\" (UID: \"f982667b-3ee3-4aaa-9b63-2bee4f32be8f\") " pod="kube-system/coredns-5dd5756b68-g2x92"
	Jan 03 20:17:12 multinode-004925 kubelet[1395]: I0103 20:17:12.395682    1395 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/47ba7e03-3ba6-4a93-80c0-6ff32c31f9bb-tmp\") pod \"storage-provisioner\" (UID: \"47ba7e03-3ba6-4a93-80c0-6ff32c31f9bb\") " pod="kube-system/storage-provisioner"
	Jan 03 20:17:12 multinode-004925 kubelet[1395]: I0103 20:17:12.395711    1395 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhcp7\" (UniqueName: \"kubernetes.io/projected/47ba7e03-3ba6-4a93-80c0-6ff32c31f9bb-kube-api-access-qhcp7\") pod \"storage-provisioner\" (UID: \"47ba7e03-3ba6-4a93-80c0-6ff32c31f9bb\") " pod="kube-system/storage-provisioner"
	Jan 03 20:17:12 multinode-004925 kubelet[1395]: I0103 20:17:12.395739    1395 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmjtq\" (UniqueName: \"kubernetes.io/projected/f982667b-3ee3-4aaa-9b63-2bee4f32be8f-kube-api-access-jmjtq\") pod \"coredns-5dd5756b68-g2x92\" (UID: \"f982667b-3ee3-4aaa-9b63-2bee4f32be8f\") " pod="kube-system/coredns-5dd5756b68-g2x92"
	Jan 03 20:17:13 multinode-004925 kubelet[1395]: I0103 20:17:13.626185    1395 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=31.626142767 podCreationTimestamp="2024-01-03 20:16:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-01-03 20:17:13.614268973 +0000 UTC m=+45.407478866" watchObservedRunningTime="2024-01-03 20:17:13.626142767 +0000 UTC m=+45.419352651"
	Jan 03 20:18:03 multinode-004925 kubelet[1395]: I0103 20:18:03.519707    1395 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-g2x92" podStartSLOduration=83.51966423 podCreationTimestamp="2024-01-03 20:16:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-01-03 20:17:13.627233694 +0000 UTC m=+45.420443587" watchObservedRunningTime="2024-01-03 20:18:03.51966423 +0000 UTC m=+95.312874123"
	Jan 03 20:18:03 multinode-004925 kubelet[1395]: I0103 20:18:03.520051    1395 topology_manager.go:215] "Topology Admit Handler" podUID="097b3f59-af79-4fe2-9ef8-a6198202a4d7" podNamespace="default" podName="busybox-5bc68d56bd-fs9dz"
	Jan 03 20:18:03 multinode-004925 kubelet[1395]: I0103 20:18:03.541710    1395 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2v7p5\" (UniqueName: \"kubernetes.io/projected/097b3f59-af79-4fe2-9ef8-a6198202a4d7-kube-api-access-2v7p5\") pod \"busybox-5bc68d56bd-fs9dz\" (UID: \"097b3f59-af79-4fe2-9ef8-a6198202a4d7\") " pod="default/busybox-5bc68d56bd-fs9dz"
	Jan 03 20:18:03 multinode-004925 kubelet[1395]: W0103 20:18:03.861736    1395 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/a8b5d16b1951a483b6aef678c3a9e4af0d01efc76e4ab6e72a5f853aa60f6da3/crio-7b8a170f8c17e3555ea2eb595d56188a3deeade175fb8fec1a516feb5699478e WatchSource:0}: Error finding container 7b8a170f8c17e3555ea2eb595d56188a3deeade175fb8fec1a516feb5699478e: Status 404 returned error can't find the container with id 7b8a170f8c17e3555ea2eb595d56188a3deeade175fb8fec1a516feb5699478e
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p multinode-004925 -n multinode-004925
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-004925 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (4.24s)
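For context, PingHostFrom2Pods execs a ping from the test's busybox pods back to the host machine. A rough manual equivalent, offered only as a hedged sketch (the pod name is taken from the kubelet log above, and host.minikube.internal is the host alias minikube publishes into the cluster):

	# hedged sketch, not the harness's exact code
	kubectl --context multinode-004925 exec busybox-5bc68d56bd-fs9dz -- sh -c "ping -c 1 host.minikube.internal"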

                                                
                                    
x
+
TestRunningBinaryUpgrade (81.16s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:133: (dbg) Run:  /tmp/minikube-v1.17.0.1118473119.exe start -p running-upgrade-251987 --memory=2200 --vm-driver=docker  --container-runtime=crio
E0103 20:34:05.328872  414763 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/addons-845596/client.crt: no such file or directory
E0103 20:34:11.513776  414763 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/functional-155561/client.crt: no such file or directory
version_upgrade_test.go:133: (dbg) Done: /tmp/minikube-v1.17.0.1118473119.exe start -p running-upgrade-251987 --memory=2200 --vm-driver=docker  --container-runtime=crio: (1m12.378027438s)
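The flow under test: bring the profile up with a v1.17.0 release binary, then restart the same profile with the freshly built binary. A hedged manual equivalent (the release-bucket URL pattern is an assumption, not taken from this log):

	# sketch of the two-step upgrade flow exercised by this test
	curl -Lo /tmp/minikube-v1.17.0 https://storage.googleapis.com/minikube/releases/v1.17.0/minikube-linux-arm64
	chmod +x /tmp/minikube-v1.17.0
	/tmp/minikube-v1.17.0 start -p running-upgrade-251987 --memory=2200 --vm-driver=docker --container-runtime=crio
	out/minikube-linux-arm64 start -p running-upgrade-251987 --memory=2200 --driver=docker --container-runtime=crio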
version_upgrade_test.go:143: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-251987 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:143: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p running-upgrade-251987 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90 (3.575087047s)

                                                
                                                
-- stdout --
	* [running-upgrade-251987] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17885
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17885-409390/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17885-409390/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	* Using the docker driver based on existing profile
	* Starting control plane node running-upgrade-251987 in cluster running-upgrade-251987
	* Pulling base image v0.0.42-1703498848-17857 ...
	* Updating the running docker "running-upgrade-251987" container ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0103 20:35:11.229871  538821 out.go:296] Setting OutFile to fd 1 ...
	I0103 20:35:11.230094  538821 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0103 20:35:11.230122  538821 out.go:309] Setting ErrFile to fd 2...
	I0103 20:35:11.230142  538821 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0103 20:35:11.230471  538821 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17885-409390/.minikube/bin
	I0103 20:35:11.230960  538821 out.go:303] Setting JSON to false
	I0103 20:35:11.232136  538821 start.go:128] hostinfo: {"hostname":"ip-172-31-29-130","uptime":8261,"bootTime":1704305851,"procs":215,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0103 20:35:11.232260  538821 start.go:138] virtualization:  
	I0103 20:35:11.234929  538821 out.go:177] * [running-upgrade-251987] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0103 20:35:11.237428  538821 out.go:177]   - MINIKUBE_LOCATION=17885
	I0103 20:35:11.239344  538821 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0103 20:35:11.237574  538821 preload.go:306] deleting older generation preload /home/jenkins/minikube-integration/17885-409390/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v8-v1.20.2-cri-o-overlay-arm64.tar.lz4
	I0103 20:35:11.237607  538821 notify.go:220] Checking for updates...
	I0103 20:35:11.242167  538821 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17885-409390/kubeconfig
	I0103 20:35:11.243962  538821 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17885-409390/.minikube
	I0103 20:35:11.246191  538821 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0103 20:35:11.248144  538821 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0103 20:35:11.250579  538821 config.go:182] Loaded profile config "running-upgrade-251987": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I0103 20:35:11.253185  538821 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I0103 20:35:11.254986  538821 driver.go:392] Setting default libvirt URI to qemu:///system
	I0103 20:35:11.288628  538821 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I0103 20:35:11.288735  538821 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0103 20:35:11.416147  538821 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:54 SystemTime:2024-01-03 20:35:11.404532716 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215044096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0103 20:35:11.416257  538821 docker.go:295] overlay module found
	I0103 20:35:11.418246  538821 out.go:177] * Using the docker driver based on existing profile
	I0103 20:35:11.419896  538821 start.go:298] selected driver: docker
	I0103 20:35:11.419914  538821 start.go:902] validating driver "docker" against &{Name:running-upgrade-251987 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:running-upgrade-251987 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.70.84 Port:8443 KubernetesVersion:v1.20.2 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I0103 20:35:11.420014  538821 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0103 20:35:11.420626  538821 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0103 20:35:11.426628  538821 preload.go:306] deleting older generation preload /home/jenkins/minikube-integration/17885-409390/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v8-v1.20.2-cri-o-overlay-arm64.tar.lz4.checksum
	I0103 20:35:11.493979  538821 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:54 SystemTime:2024-01-03 20:35:11.483429307 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215044096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0103 20:35:11.494435  538821 cni.go:84] Creating CNI manager for ""
	I0103 20:35:11.494451  538821 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0103 20:35:11.494463  538821 start_flags.go:323] config:
	{Name:running-upgrade-251987 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:running-upgrade-251987 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.70.84 Port:8443 KubernetesVersion:v1.20.2 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I0103 20:35:11.497485  538821 out.go:177] * Starting control plane node running-upgrade-251987 in cluster running-upgrade-251987
	I0103 20:35:11.499205  538821 cache.go:121] Beginning downloading kic base image for docker with crio
	I0103 20:35:11.501254  538821 out.go:177] * Pulling base image v0.0.42-1703498848-17857 ...
	I0103 20:35:11.502823  538821 preload.go:132] Checking if preload exists for k8s version v1.20.2 and runtime crio
	I0103 20:35:11.502922  538821 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e in local docker daemon
	I0103 20:35:11.522506  538821 image.go:83] Found gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e in local docker daemon, skipping pull
	I0103 20:35:11.522574  538821 cache.go:144] gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e exists in daemon, skipping load
	W0103 20:35:11.591904  538821 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.2/preloaded-images-k8s-v18-v1.20.2-cri-o-overlay-arm64.tar.lz4 status code: 404
	I0103 20:35:11.592071  538821 profile.go:148] Saving config to /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/running-upgrade-251987/config.json ...
	I0103 20:35:11.592330  538821 cache.go:194] Successfully downloaded all kic artifacts
	I0103 20:35:11.592378  538821 start.go:365] acquiring machines lock for running-upgrade-251987: {Name:mk4a37264c6cc17267c540c64db50230fa494b6a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0103 20:35:11.592440  538821 start.go:369] acquired machines lock for "running-upgrade-251987" in 34.543µs
	I0103 20:35:11.592457  538821 start.go:96] Skipping create...Using existing machine configuration
	I0103 20:35:11.592466  538821 fix.go:54] fixHost starting: 
	I0103 20:35:11.592764  538821 cli_runner.go:164] Run: docker container inspect running-upgrade-251987 --format={{.State.Status}}
	I0103 20:35:11.592981  538821 cache.go:107] acquiring lock: {Name:mk78d87b8ba8b51b681a7c0163fc10f10a5ff4f1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0103 20:35:11.593046  538821 cache.go:115] /home/jenkins/minikube-integration/17885-409390/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0103 20:35:11.593060  538821 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/17885-409390/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 82.124µs
	I0103 20:35:11.593073  538821 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/17885-409390/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0103 20:35:11.593082  538821 cache.go:107] acquiring lock: {Name:mka55c36f2c1ee731e00cdb772546de2b15db0fa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0103 20:35:11.593118  538821 cache.go:115] /home/jenkins/minikube-integration/17885-409390/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2 exists
	I0103 20:35:11.593127  538821 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.20.2" -> "/home/jenkins/minikube-integration/17885-409390/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2" took 45.94µs
	I0103 20:35:11.593135  538821 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.20.2 -> /home/jenkins/minikube-integration/17885-409390/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2 succeeded
	I0103 20:35:11.593144  538821 cache.go:107] acquiring lock: {Name:mk50a92db3eea7005bb38171fdab6100907d689b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0103 20:35:11.593173  538821 cache.go:115] /home/jenkins/minikube-integration/17885-409390/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2 exists
	I0103 20:35:11.593184  538821 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.20.2" -> "/home/jenkins/minikube-integration/17885-409390/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2" took 40.36µs
	I0103 20:35:11.593192  538821 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.20.2 -> /home/jenkins/minikube-integration/17885-409390/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2 succeeded
	I0103 20:35:11.593207  538821 cache.go:107] acquiring lock: {Name:mk931f5c48fff6b3aa8eefdb1c5e4d81001db2c4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0103 20:35:11.593236  538821 cache.go:115] /home/jenkins/minikube-integration/17885-409390/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2 exists
	I0103 20:35:11.593244  538821 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.20.2" -> "/home/jenkins/minikube-integration/17885-409390/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2" took 37.743µs
	I0103 20:35:11.593253  538821 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.20.2 -> /home/jenkins/minikube-integration/17885-409390/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2 succeeded
	I0103 20:35:11.593266  538821 cache.go:107] acquiring lock: {Name:mk5848a21617272537681ae8c4b87f10f3fde221 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0103 20:35:11.593302  538821 cache.go:115] /home/jenkins/minikube-integration/17885-409390/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2 exists
	I0103 20:35:11.593312  538821 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.20.2" -> "/home/jenkins/minikube-integration/17885-409390/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2" took 46.12µs
	I0103 20:35:11.593319  538821 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.20.2 -> /home/jenkins/minikube-integration/17885-409390/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2 succeeded
	I0103 20:35:11.593333  538821 cache.go:107] acquiring lock: {Name:mk2c4870517c3a4fedad9d87517a6219ab69b98f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0103 20:35:11.593363  538821 cache.go:115] /home/jenkins/minikube-integration/17885-409390/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2 exists
	I0103 20:35:11.593371  538821 cache.go:96] cache image "registry.k8s.io/pause:3.2" -> "/home/jenkins/minikube-integration/17885-409390/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2" took 39.877µs
	I0103 20:35:11.593378  538821 cache.go:80] save to tar file registry.k8s.io/pause:3.2 -> /home/jenkins/minikube-integration/17885-409390/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2 succeeded
	I0103 20:35:11.593386  538821 cache.go:107] acquiring lock: {Name:mkde64ff2c3d0eb724a11f7867644cecbc61610f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0103 20:35:11.593411  538821 cache.go:115] /home/jenkins/minikube-integration/17885-409390/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0 exists
	I0103 20:35:11.593415  538821 cache.go:96] cache image "registry.k8s.io/etcd:3.4.13-0" -> "/home/jenkins/minikube-integration/17885-409390/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0" took 30.425µs
	I0103 20:35:11.593421  538821 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.13-0 -> /home/jenkins/minikube-integration/17885-409390/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0 succeeded
	I0103 20:35:11.593434  538821 cache.go:107] acquiring lock: {Name:mkbde15802b62fcbea1b27c9517898b77c941f0d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0103 20:35:11.593469  538821 cache.go:115] /home/jenkins/minikube-integration/17885-409390/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0 exists
	I0103 20:35:11.593478  538821 cache.go:96] cache image "registry.k8s.io/coredns:1.7.0" -> "/home/jenkins/minikube-integration/17885-409390/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0" took 44.553µs
	I0103 20:35:11.593484  538821 cache.go:80] save to tar file registry.k8s.io/coredns:1.7.0 -> /home/jenkins/minikube-integration/17885-409390/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0 succeeded
	I0103 20:35:11.593490  538821 cache.go:87] Successfully saved all images to host disk.
	I0103 20:35:11.611982  538821 fix.go:102] recreateIfNeeded on running-upgrade-251987: state=Running err=<nil>
	W0103 20:35:11.612015  538821 fix.go:128] unexpected machine state, will restart: <nil>
	I0103 20:35:11.614114  538821 out.go:177] * Updating the running docker "running-upgrade-251987" container ...
	I0103 20:35:11.615958  538821 machine.go:88] provisioning docker machine ...
	I0103 20:35:11.615985  538821 ubuntu.go:169] provisioning hostname "running-upgrade-251987"
	I0103 20:35:11.616064  538821 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-251987
	I0103 20:35:11.636187  538821 main.go:141] libmachine: Using SSH client type: native
	I0103 20:35:11.636685  538821 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3beb50] 0x3c12c0 <nil>  [] 0s} 127.0.0.1 33289 <nil> <nil>}
	I0103 20:35:11.636706  538821 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-251987 && echo "running-upgrade-251987" | sudo tee /etc/hostname
	I0103 20:35:11.793883  538821 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-251987
	
	I0103 20:35:11.793963  538821 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-251987
	I0103 20:35:11.812921  538821 main.go:141] libmachine: Using SSH client type: native
	I0103 20:35:11.813356  538821 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3beb50] 0x3c12c0 <nil>  [] 0s} 127.0.0.1 33289 <nil> <nil>}
	I0103 20:35:11.813381  538821 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-251987' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-251987/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-251987' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0103 20:35:11.956406  538821 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0103 20:35:11.956429  538821 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17885-409390/.minikube CaCertPath:/home/jenkins/minikube-integration/17885-409390/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17885-409390/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17885-409390/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17885-409390/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17885-409390/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17885-409390/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17885-409390/.minikube}
	I0103 20:35:11.956456  538821 ubuntu.go:177] setting up certificates
	I0103 20:35:11.956480  538821 provision.go:83] configureAuth start
	I0103 20:35:11.956543  538821 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" running-upgrade-251987
	I0103 20:35:11.979454  538821 provision.go:138] copyHostCerts
	I0103 20:35:11.979631  538821 exec_runner.go:144] found /home/jenkins/minikube-integration/17885-409390/.minikube/key.pem, removing ...
	I0103 20:35:11.979645  538821 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17885-409390/.minikube/key.pem
	I0103 20:35:11.979793  538821 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17885-409390/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17885-409390/.minikube/key.pem (1679 bytes)
	I0103 20:35:11.980115  538821 exec_runner.go:144] found /home/jenkins/minikube-integration/17885-409390/.minikube/ca.pem, removing ...
	I0103 20:35:11.980127  538821 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17885-409390/.minikube/ca.pem
	I0103 20:35:11.980218  538821 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17885-409390/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17885-409390/.minikube/ca.pem (1078 bytes)
	I0103 20:35:11.980398  538821 exec_runner.go:144] found /home/jenkins/minikube-integration/17885-409390/.minikube/cert.pem, removing ...
	I0103 20:35:11.980409  538821 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17885-409390/.minikube/cert.pem
	I0103 20:35:11.980487  538821 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17885-409390/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17885-409390/.minikube/cert.pem (1123 bytes)
	I0103 20:35:11.980626  538821 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17885-409390/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17885-409390/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17885-409390/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-251987 san=[192.168.70.84 127.0.0.1 localhost 127.0.0.1 minikube running-upgrade-251987]
	I0103 20:35:12.183753  538821 provision.go:172] copyRemoteCerts
	I0103 20:35:12.183828  538821 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0103 20:35:12.183880  538821 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-251987
	I0103 20:35:12.209273  538821 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33289 SSHKeyPath:/home/jenkins/minikube-integration/17885-409390/.minikube/machines/running-upgrade-251987/id_rsa Username:docker}
	I0103 20:35:12.308340  538821 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-409390/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0103 20:35:12.332335  538821 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-409390/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0103 20:35:12.356394  538821 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-409390/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0103 20:35:12.381081  538821 provision.go:86] duration metric: configureAuth took 424.572183ms
	I0103 20:35:12.381167  538821 ubuntu.go:193] setting minikube options for container-runtime
	I0103 20:35:12.381384  538821 config.go:182] Loaded profile config "running-upgrade-251987": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I0103 20:35:12.381523  538821 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-251987
	I0103 20:35:12.402603  538821 main.go:141] libmachine: Using SSH client type: native
	I0103 20:35:12.403006  538821 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3beb50] 0x3c12c0 <nil>  [] 0s} 127.0.0.1 33289 <nil> <nil>}
	I0103 20:35:12.403029  538821 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0103 20:35:12.960688  538821 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0103 20:35:12.960711  538821 machine.go:91] provisioned docker machine in 1.344735593s
	I0103 20:35:12.960722  538821 start.go:300] post-start starting for "running-upgrade-251987" (driver="docker")
	I0103 20:35:12.960733  538821 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0103 20:35:12.960800  538821 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0103 20:35:12.960849  538821 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-251987
	I0103 20:35:12.981036  538821 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33289 SSHKeyPath:/home/jenkins/minikube-integration/17885-409390/.minikube/machines/running-upgrade-251987/id_rsa Username:docker}
	I0103 20:35:13.085716  538821 ssh_runner.go:195] Run: cat /etc/os-release
	I0103 20:35:13.089687  538821 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0103 20:35:13.089715  538821 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0103 20:35:13.089726  538821 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0103 20:35:13.089734  538821 info.go:137] Remote host: Ubuntu 20.04.1 LTS
	I0103 20:35:13.089744  538821 filesync.go:126] Scanning /home/jenkins/minikube-integration/17885-409390/.minikube/addons for local assets ...
	I0103 20:35:13.089805  538821 filesync.go:126] Scanning /home/jenkins/minikube-integration/17885-409390/.minikube/files for local assets ...
	I0103 20:35:13.089908  538821 filesync.go:149] local asset: /home/jenkins/minikube-integration/17885-409390/.minikube/files/etc/ssl/certs/4147632.pem -> 4147632.pem in /etc/ssl/certs
	I0103 20:35:13.090017  538821 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0103 20:35:13.098976  538821 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-409390/.minikube/files/etc/ssl/certs/4147632.pem --> /etc/ssl/certs/4147632.pem (1708 bytes)
	I0103 20:35:13.122660  538821 start.go:303] post-start completed in 161.921547ms
	I0103 20:35:13.122827  538821 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0103 20:35:13.122873  538821 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-251987
	I0103 20:35:13.142711  538821 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33289 SSHKeyPath:/home/jenkins/minikube-integration/17885-409390/.minikube/machines/running-upgrade-251987/id_rsa Username:docker}
	I0103 20:35:13.245948  538821 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0103 20:35:13.253045  538821 fix.go:56] fixHost completed within 1.660559435s
	I0103 20:35:13.253084  538821 start.go:83] releasing machines lock for "running-upgrade-251987", held for 1.660632599s
	I0103 20:35:13.253178  538821 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" running-upgrade-251987
	I0103 20:35:13.276784  538821 ssh_runner.go:195] Run: cat /version.json
	I0103 20:35:13.276801  538821 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0103 20:35:13.276836  538821 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-251987
	I0103 20:35:13.276852  538821 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-251987
	I0103 20:35:13.321436  538821 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33289 SSHKeyPath:/home/jenkins/minikube-integration/17885-409390/.minikube/machines/running-upgrade-251987/id_rsa Username:docker}
	I0103 20:35:13.327552  538821 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33289 SSHKeyPath:/home/jenkins/minikube-integration/17885-409390/.minikube/machines/running-upgrade-251987/id_rsa Username:docker}
	W0103 20:35:13.439467  538821 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0103 20:35:13.439573  538821 ssh_runner.go:195] Run: systemctl --version
	I0103 20:35:13.571463  538821 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0103 20:35:13.725590  538821 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0103 20:35:13.733275  538821 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0103 20:35:13.768301  538821 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0103 20:35:13.768492  538821 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0103 20:35:13.800071  538821 cni.go:262] disabled [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0103 20:35:13.800140  538821 start.go:475] detecting cgroup driver to use...
	I0103 20:35:13.800188  538821 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0103 20:35:13.800283  538821 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0103 20:35:13.834493  538821 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0103 20:35:13.851227  538821 docker.go:203] disabling cri-docker service (if available) ...
	I0103 20:35:13.851307  538821 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0103 20:35:13.865942  538821 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0103 20:35:13.881801  538821 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W0103 20:35:13.901849  538821 docker.go:213] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I0103 20:35:13.901932  538821 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0103 20:35:14.113971  538821 docker.go:219] disabling docker service ...
	I0103 20:35:14.114047  538821 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0103 20:35:14.142689  538821 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0103 20:35:14.169915  538821 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0103 20:35:14.420910  538821 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0103 20:35:14.678897  538821 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0103 20:35:14.692336  538821 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0103 20:35:14.710488  538821 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0103 20:35:14.710597  538821 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0103 20:35:14.723613  538821 out.go:177] 
	W0103 20:35:14.725187  538821 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	W0103 20:35:14.725205  538821 out.go:239] * 
	* 
	W0103 20:35:14.726408  538821 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0103 20:35:14.728550  538821 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:145: upgrade from v1.17.0 to HEAD failed: out/minikube-linux-arm64 start -p running-upgrade-251987 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90
panic.go:523: *** TestRunningBinaryUpgrade FAILED at 2024-01-03 20:35:14.756993018 +0000 UTC m=+2581.405736354
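The exit status 90 above comes from the pause_image step: the stderr block shows the new binary running sed on /etc/crio/crio.conf.d/02-crio.conf, a drop-in file that the v0.0.17 kicbase image created by minikube v1.17.0 evidently does not ship. A hedged manual workaround, assuming the profile container is still running, that its CRI-O reads the drop-in directory, and that the old image keeps its pause_image setting in /etc/crio/crio.conf:

	# hypothetical pre-step, not part of the harness: seed the drop-in the
	# newer minikube expects before retrying the second start
	out/minikube-linux-arm64 ssh -p running-upgrade-251987 "sudo mkdir -p /etc/crio/crio.conf.d && sudo cp /etc/crio/crio.conf /etc/crio/crio.conf.d/02-crio.conf"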
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestRunningBinaryUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect running-upgrade-251987
helpers_test.go:235: (dbg) docker inspect running-upgrade-251987:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "b6f4b46abf013d6d9c7d4f229532fc3194658bb3bb9573101362f27dca248f89",
	        "Created": "2024-01-03T20:34:23.984627488Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 535388,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-01-03T20:34:24.571184449Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:9b79b8263a5873a7b57b8bb7698df1f71e90108b3174dea92dc6c576c0a9dbf9",
	        "ResolvConfPath": "/var/lib/docker/containers/b6f4b46abf013d6d9c7d4f229532fc3194658bb3bb9573101362f27dca248f89/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b6f4b46abf013d6d9c7d4f229532fc3194658bb3bb9573101362f27dca248f89/hostname",
	        "HostsPath": "/var/lib/docker/containers/b6f4b46abf013d6d9c7d4f229532fc3194658bb3bb9573101362f27dca248f89/hosts",
	        "LogPath": "/var/lib/docker/containers/b6f4b46abf013d6d9c7d4f229532fc3194658bb3bb9573101362f27dca248f89/b6f4b46abf013d6d9c7d4f229532fc3194658bb3bb9573101362f27dca248f89-json.log",
	        "Name": "/running-upgrade-251987",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "running-upgrade-251987:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "running-upgrade-251987",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/33b49f5f50a463e947d633f69d0b37b5c89ada14b04bf060cd83351ad6c4fc78-init/diff:/var/lib/docker/overlay2/5c46a0427ca2181b5ac31b2de22e30ddfa257e8bc7be71ff0c2fcd0f54cebea3/diff:/var/lib/docker/overlay2/e1835a24a72a1978eb7cbb8323828a0fe8e48c243d56aca4e413f12ea48ea255/diff:/var/lib/docker/overlay2/409e5c5c6933761ba8d9f050bd7de867564521ee89d6a1bdaf5b2f62ae6e7158/diff:/var/lib/docker/overlay2/820604d4b926dd4f75e58277782c265dd8895ff5f633bcd94584758d08794e0a/diff:/var/lib/docker/overlay2/e19523e9de46639d6c665eec5919a033fe7c37c8dcaccb0455d61577cbf80a0e/diff:/var/lib/docker/overlay2/4c64dc42027edc862502bf42d025625034b2854396a8efd33ab6694ba05af955/diff:/var/lib/docker/overlay2/2ab2919c9604e2b3bf91cb64953912f3abd04a5349876668fb93668a352e524c/diff:/var/lib/docker/overlay2/8ed6eda7b5fc73a75bfc84a6e97dc39dcd861a595a3eebe20ad0fb040baf8f69/diff:/var/lib/docker/overlay2/5200182f2fa6d6c45db9dc7ba7e32c54e76a4517456151a5d2c25f4494d472ad/diff:/var/lib/docker/overlay2/589865
86f6bf7b334b759da2b98e6d00caa41b66d3a71dd2ea81da70ff3dc8ee/diff:/var/lib/docker/overlay2/67a395ccff492a628dd49ad0c897ac89df5ae8db0ec35983271040b67abdddb4/diff:/var/lib/docker/overlay2/52a6babc5c264a36ef7dd70bfe848c02f705030b01fd22384337a8d6aa808b70/diff:/var/lib/docker/overlay2/96330eb3a055ccde647fecf62841ccf9351b905211cc3aa275b80ff1d697e17c/diff:/var/lib/docker/overlay2/50f7a390e99f2434bdd55db4b84f6ed636ad7086ea16c5763b7bd02a20a4130d/diff:/var/lib/docker/overlay2/58105a736c8da0366743732cbc133c5e24b7a6579e0d77d833d2dcb0fd5f45d1/diff:/var/lib/docker/overlay2/e91a396297ce1686b01f535693f8e8c364df4637368716fc87af37a2f10fbff2/diff:/var/lib/docker/overlay2/77b168f3f6f2e5c3a97cfef652dc4040568651a5fa3b5134dc77d9be15634502/diff:/var/lib/docker/overlay2/8b3260dfd58a02b7d8159b6afc3c159bd7291cfd9a1775fbaf29fe4e76f69a20/diff:/var/lib/docker/overlay2/c753d6214069dd46cd0081b824f43c0df60ffdfb938b92aea554ae3a8b9c1508/diff:/var/lib/docker/overlay2/c680d335eb38a7a2abe5b76424c6c54ed385b95b20cc532277cbe73a6201605f/diff:/var/lib/d
ocker/overlay2/e327f1867e80da378eaa32aea670dfa17608a323df5e2ce2a2927ab71a89434c/diff:/var/lib/docker/overlay2/8f7fc611ecb811805ce16c049acc698a4bbf315767108827580fd068c114a49f/diff:/var/lib/docker/overlay2/b4c041c40097d3daa25f56d9b31ce6dc7ea86b72f56b8099925359624e4c835a/diff:/var/lib/docker/overlay2/c2d84c8c50a90b4570914536b5feb2c44fd914c8795ec8fa02c4d350f0c1819c/diff:/var/lib/docker/overlay2/9a19bfa03522606fcbb9edfacd46b8252ef3d62dbe752bae1df89bbd4917e6de/diff:/var/lib/docker/overlay2/e04503bad449f92373005711667d7f4e3506424d4e841f4c56a7a0bdce070017/diff:/var/lib/docker/overlay2/16dbe753ab78a508c4259387e8bff48c1eeb15f4496ba987c815a2f5cdc729af/diff:/var/lib/docker/overlay2/94ee89cad2ca3484068b8feb6abe9debee0032df6ec0a66d2ca913f5ac446cf6/diff:/var/lib/docker/overlay2/6895cf5a950b8ca218cf7963ff33158826561a3d0d5557f4e1ccb2ddef13524b/diff:/var/lib/docker/overlay2/248d8a94b8d3fb1543c9cc0c2538dda1c7cb836af69deebb22889c97f3f680bf/diff:/var/lib/docker/overlay2/9ebf525bfe9fa5b50bbda7834311a1794d6fcbb388ae8e417ad386b34be
f0821/diff:/var/lib/docker/overlay2/9ab0def267dbbc6372345e4b8b33b2d2a23092804849e2a55223ba546ea490fb/diff:/var/lib/docker/overlay2/484f5f631f05654cb13c756a72edbbdf9c0a050b7d3cfb9cacbe84422a291d68/diff:/var/lib/docker/overlay2/dd81b84a651c35f6c6bd454071c315e512b0b4902814f9dbe70575d32f975a3c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/33b49f5f50a463e947d633f69d0b37b5c89ada14b04bf060cd83351ad6c4fc78/merged",
	                "UpperDir": "/var/lib/docker/overlay2/33b49f5f50a463e947d633f69d0b37b5c89ada14b04bf060cd83351ad6c4fc78/diff",
	                "WorkDir": "/var/lib/docker/overlay2/33b49f5f50a463e947d633f69d0b37b5c89ada14b04bf060cd83351ad6c4fc78/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "running-upgrade-251987",
	                "Source": "/var/lib/docker/volumes/running-upgrade-251987/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "running-upgrade-251987",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "running-upgrade-251987",
	                "name.minikube.sigs.k8s.io": "running-upgrade-251987",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b101fefef239665853838ada71a2c9d5721a183fe9bb56e37da78553f3f7ae0f",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33289"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33288"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33287"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33286"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/b101fefef239",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "running-upgrade-251987": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.70.84"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "b6f4b46abf01",
	                        "running-upgrade-251987"
	                    ],
	                    "NetworkID": "41bacb402902c9273e6f652d1ecb663da65423c515c0f9200df9316a34a74d0b",
	                    "EndpointID": "82f1d696600695b18fa3ec438f80db501b223debe206bda36f1db10549e9c1f2",
	                    "Gateway": "192.168.70.1",
	                    "IPAddress": "192.168.70.84",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:46:54",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
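
The dump above is ordinary `docker inspect` output for the running-upgrade-251987 node container. A minimal sketch (not the harness's own code) of probing just the state field the way the status checks below do, assuming a local Docker daemon on PATH:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerStatus shells out the same way the checks in this report do:
// docker container inspect <name> --format={{.State.Status}}
func containerStatus(name string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect", name,
		"--format", "{{.State.Status}}").Output()
	if err != nil {
		return "", fmt.Errorf("inspect %s: %w", name, err)
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	status, err := containerStatus("running-upgrade-251987")
	if err != nil {
		fmt.Println("unknown state:", err) // "No such container" once it is deleted
		return
	}
	fmt.Println("container status:", status) // e.g. "running" or "exited"
}
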
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p running-upgrade-251987 -n running-upgrade-251987
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p running-upgrade-251987 -n running-upgrade-251987: exit status 4 (631.369628ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0103 20:35:15.244195  539468 status.go:415] kubeconfig endpoint: extract IP: "running-upgrade-251987" does not appear in /home/jenkins/minikube-integration/17885-409390/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 4 (may be ok)
helpers_test.go:241: "running-upgrade-251987" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
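
The exit status 4 here is kubeconfig bookkeeping rather than a host failure: the container reports Running, but status.go cannot extract an endpoint because the profile has no cluster entry in the kubeconfig. A minimal sketch of that lookup, assuming k8s.io/client-go is available (this is not minikube's exact code; the path and profile name are the ones from the error message above):

package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.LoadFromFile("/home/jenkins/minikube-integration/17885-409390/kubeconfig")
	if err != nil {
		fmt.Println("load kubeconfig:", err)
		return
	}
	cluster, ok := cfg.Clusters["running-upgrade-251987"]
	if !ok {
		// The condition behind `extract IP: ... does not appear in .../kubeconfig`;
		// `minikube update-context` would rewrite the entry.
		fmt.Println("cluster entry missing from kubeconfig")
		return
	}
	fmt.Println("endpoint:", cluster.Server)
}
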
helpers_test.go:175: Cleaning up "running-upgrade-251987" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-251987
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-251987: (3.38885191s)
--- FAIL: TestRunningBinaryUpgrade (81.16s)

                                                
                                    
x
+
TestMissingContainerUpgrade (184.65s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:322: (dbg) Run:  /tmp/minikube-v1.17.0.286446423.exe start -p missing-upgrade-108038 --memory=2200 --driver=docker  --container-runtime=crio
E0103 20:29:11.513982  414763 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/functional-155561/client.crt: no such file or directory
version_upgrade_test.go:322: (dbg) Done: /tmp/minikube-v1.17.0.286446423.exe start -p missing-upgrade-108038 --memory=2200 --driver=docker  --container-runtime=crio: (2m17.147790149s)
version_upgrade_test.go:331: (dbg) Run:  docker stop missing-upgrade-108038
version_upgrade_test.go:331: (dbg) Done: docker stop missing-upgrade-108038: (10.28853484s)
version_upgrade_test.go:336: (dbg) Run:  docker rm missing-upgrade-108038
version_upgrade_test.go:342: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-108038 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E0103 20:32:04.568593  414763 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/ingress-addon-legacy-480050/client.crt: no such file or directory
version_upgrade_test.go:342: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p missing-upgrade-108038 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90 (33.213181379s)
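
The scenario the test drives is visible in the Run lines above: provision with an old minikube binary, delete the node container behind minikube's back, then restart with the binary under test, which must notice the missing container and recreate it. A minimal illustration of that sequence (not the test's code; binary paths are the ones logged above):

package main

import (
	"log"
	"os"
	"os/exec"
)

// run executes one step and keeps going on failure, so the final
// (here, failing with exit status 90) start is still reached.
func run(name string, args ...string) {
	cmd := exec.Command(name, args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		log.Printf("%s: %v", name, err)
	}
}

func main() {
	// 1. Provision with the old binary.
	run("/tmp/minikube-v1.17.0.286446423.exe", "start", "-p", "missing-upgrade-108038",
		"--memory=2200", "--driver=docker", "--container-runtime=crio")
	// 2. Remove the node container behind minikube's back.
	run("docker", "stop", "missing-upgrade-108038")
	run("docker", "rm", "missing-upgrade-108038")
	// 3. Restart with the binary under test; it must recreate the container.
	run("out/minikube-linux-arm64", "start", "-p", "missing-upgrade-108038",
		"--memory=2200", "--alsologtostderr", "-v=1", "--driver=docker", "--container-runtime=crio")
}
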

                                                
                                                
-- stdout --
	* [missing-upgrade-108038] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17885
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17885-409390/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17885-409390/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	* Using the docker driver based on existing profile
	* Starting control plane node missing-upgrade-108038 in cluster missing-upgrade-108038
	* Pulling base image v0.0.42-1703498848-17857 ...
	* docker "missing-upgrade-108038" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0103 20:31:37.639184  525844 out.go:296] Setting OutFile to fd 1 ...
	I0103 20:31:37.639398  525844 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0103 20:31:37.639423  525844 out.go:309] Setting ErrFile to fd 2...
	I0103 20:31:37.639441  525844 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0103 20:31:37.639745  525844 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17885-409390/.minikube/bin
	I0103 20:31:37.640188  525844 out.go:303] Setting JSON to false
	I0103 20:31:37.641158  525844 start.go:128] hostinfo: {"hostname":"ip-172-31-29-130","uptime":8047,"bootTime":1704305851,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0103 20:31:37.641260  525844 start.go:138] virtualization:  
	I0103 20:31:37.645555  525844 out.go:177] * [missing-upgrade-108038] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0103 20:31:37.648084  525844 out.go:177]   - MINIKUBE_LOCATION=17885
	I0103 20:31:37.648151  525844 notify.go:220] Checking for updates...
	I0103 20:31:37.651179  525844 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0103 20:31:37.653376  525844 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17885-409390/kubeconfig
	I0103 20:31:37.655695  525844 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17885-409390/.minikube
	I0103 20:31:37.657683  525844 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0103 20:31:37.659603  525844 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0103 20:31:37.662122  525844 config.go:182] Loaded profile config "missing-upgrade-108038": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I0103 20:31:37.664263  525844 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I0103 20:31:37.666045  525844 driver.go:392] Setting default libvirt URI to qemu:///system
	I0103 20:31:37.689268  525844 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I0103 20:31:37.689393  525844 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0103 20:31:37.770554  525844 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:45 SystemTime:2024-01-03 20:31:37.760322065 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215044096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0103 20:31:37.770691  525844 docker.go:295] overlay module found
	I0103 20:31:37.772387  525844 out.go:177] * Using the docker driver based on existing profile
	I0103 20:31:37.774071  525844 start.go:298] selected driver: docker
	I0103 20:31:37.774086  525844 start.go:902] validating driver "docker" against &{Name:missing-upgrade-108038 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:missing-upgrade-108038 Namespace:default APIServerName:minikubeCA APIServer
Names:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.59.7 Port:8443 KubernetesVersion:v1.20.2 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath:
StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I0103 20:31:37.774191  525844 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0103 20:31:37.774901  525844 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0103 20:31:37.841196  525844 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:45 SystemTime:2024-01-03 20:31:37.831363966 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215044096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0103 20:31:37.841605  525844 cni.go:84] Creating CNI manager for ""
	I0103 20:31:37.841624  525844 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0103 20:31:37.841637  525844 start_flags.go:323] config:
	{Name:missing-upgrade-108038 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:missing-upgrade-108038 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket
: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.59.7 Port:8443 KubernetesVersion:v1.20.2 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I0103 20:31:37.844412  525844 out.go:177] * Starting control plane node missing-upgrade-108038 in cluster missing-upgrade-108038
	I0103 20:31:37.846279  525844 cache.go:121] Beginning downloading kic base image for docker with crio
	I0103 20:31:37.848550  525844 out.go:177] * Pulling base image v0.0.42-1703498848-17857 ...
	I0103 20:31:37.850448  525844 preload.go:132] Checking if preload exists for k8s version v1.20.2 and runtime crio
	I0103 20:31:37.850632  525844 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e in local docker daemon
	I0103 20:31:37.869197  525844 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e to local cache
	I0103 20:31:37.869385  525844 image.go:63] Checking for gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e in local cache directory
	I0103 20:31:37.869951  525844 image.go:118] Writing gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e to local cache
	W0103 20:31:37.917506  525844 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.2/preloaded-images-k8s-v18-v1.20.2-cri-o-overlay-arm64.tar.lz4 status code: 404
	I0103 20:31:37.917679  525844 profile.go:148] Saving config to /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/missing-upgrade-108038/config.json ...
	I0103 20:31:37.917762  525844 cache.go:107] acquiring lock: {Name:mk78d87b8ba8b51b681a7c0163fc10f10a5ff4f1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0103 20:31:37.917852  525844 cache.go:115] /home/jenkins/minikube-integration/17885-409390/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0103 20:31:37.917861  525844 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/17885-409390/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 107.2µs
	I0103 20:31:37.917873  525844 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/17885-409390/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0103 20:31:37.917883  525844 cache.go:107] acquiring lock: {Name:mka55c36f2c1ee731e00cdb772546de2b15db0fa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0103 20:31:37.917970  525844 cache.go:107] acquiring lock: {Name:mk5848a21617272537681ae8c4b87f10f3fde221 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0103 20:31:37.918007  525844 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.2
	I0103 20:31:37.918117  525844 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.2
	I0103 20:31:37.918229  525844 cache.go:107] acquiring lock: {Name:mk50a92db3eea7005bb38171fdab6100907d689b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0103 20:31:37.918421  525844 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.2
	I0103 20:31:37.918478  525844 cache.go:107] acquiring lock: {Name:mk2c4870517c3a4fedad9d87517a6219ab69b98f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0103 20:31:37.918596  525844 cache.go:107] acquiring lock: {Name:mk931f5c48fff6b3aa8eefdb1c5e4d81001db2c4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0103 20:31:37.918699  525844 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.2
	I0103 20:31:37.918758  525844 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0103 20:31:37.919069  525844 cache.go:107] acquiring lock: {Name:mkde64ff2c3d0eb724a11f7867644cecbc61610f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0103 20:31:37.919394  525844 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0103 20:31:37.919710  525844 cache.go:107] acquiring lock: {Name:mkbde15802b62fcbea1b27c9517898b77c941f0d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0103 20:31:37.919842  525844 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.2
	I0103 20:31:37.919857  525844 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0103 20:31:37.920894  525844 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.2
	I0103 20:31:37.921465  525844 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0103 20:31:37.921644  525844 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0103 20:31:37.922011  525844 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.2
	I0103 20:31:37.922227  525844 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0103 20:31:37.922229  525844 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.2
	W0103 20:31:38.374165  525844 image.go:265] image registry.k8s.io/coredns:1.7.0 arch mismatch: want arm64 got amd64. fixing
	I0103 20:31:38.374220  525844 cache.go:162] opening:  /home/jenkins/minikube-integration/17885-409390/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0
	I0103 20:31:38.458876  525844 cache.go:162] opening:  /home/jenkins/minikube-integration/17885-409390/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2
	I0103 20:31:38.467641  525844 cache.go:162] opening:  /home/jenkins/minikube-integration/17885-409390/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2
	I0103 20:31:38.470494  525844 cache.go:162] opening:  /home/jenkins/minikube-integration/17885-409390/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2
	W0103 20:31:38.471166  525844 image.go:265] image registry.k8s.io/kube-proxy:v1.20.2 arch mismatch: want arm64 got amd64. fixing
	I0103 20:31:38.471230  525844 cache.go:162] opening:  /home/jenkins/minikube-integration/17885-409390/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2
	W0103 20:31:38.482374  525844 image.go:265] image registry.k8s.io/etcd:3.4.13-0 arch mismatch: want arm64 got amd64. fixing
	I0103 20:31:38.482468  525844 cache.go:162] opening:  /home/jenkins/minikube-integration/17885-409390/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0
	I0103 20:31:38.484550  525844 cache.go:162] opening:  /home/jenkins/minikube-integration/17885-409390/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2
	    > gcr.io/k8s-minikube/kicbase...:  17.69 KiB / 287.99 MiB [>] 0.01% ? p/s ?
	I0103 20:31:38.572216  525844 cache.go:157] /home/jenkins/minikube-integration/17885-409390/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2 exists
	I0103 20:31:38.572238  525844 cache.go:96] cache image "registry.k8s.io/pause:3.2" -> "/home/jenkins/minikube-integration/17885-409390/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2" took 653.766062ms
	I0103 20:31:38.572251  525844 cache.go:80] save to tar file registry.k8s.io/pause:3.2 -> /home/jenkins/minikube-integration/17885-409390/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2 succeeded
	    > gcr.io/k8s-minikube/kicbase...:  577.28 KiB / 287.99 MiB [] 0.20% ? p/s ?
	I0103 20:31:38.948151  525844 cache.go:157] /home/jenkins/minikube-integration/17885-409390/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2 exists
	I0103 20:31:38.948227  525844 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.20.2" -> "/home/jenkins/minikube-integration/17885-409390/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2" took 1.029651042s
	I0103 20:31:38.948255  525844 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.20.2 -> /home/jenkins/minikube-integration/17885-409390/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2 succeeded
	I0103 20:31:38.962588  525844 cache.go:157] /home/jenkins/minikube-integration/17885-409390/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0 exists
	I0103 20:31:38.962660  525844 cache.go:96] cache image "registry.k8s.io/coredns:1.7.0" -> "/home/jenkins/minikube-integration/17885-409390/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0" took 1.042954665s
	I0103 20:31:38.962687  525844 cache.go:80] save to tar file registry.k8s.io/coredns:1.7.0 -> /home/jenkins/minikube-integration/17885-409390/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0 succeeded
	    > gcr.io/k8s-minikube/kicbase...:  12.42 MiB / 287.99 MiB [>] 4.31% ? p/s ?
	    > gcr.io/k8s-minikube/kicbase...:  25.93 MiB / 287.99 MiB  9.00% 43.12 MiB
	I0103 20:31:39.296674  525844 cache.go:157] /home/jenkins/minikube-integration/17885-409390/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2 exists
	I0103 20:31:39.296706  525844 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.20.2" -> "/home/jenkins/minikube-integration/17885-409390/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2" took 1.378821769s
	I0103 20:31:39.296721  525844 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.20.2 -> /home/jenkins/minikube-integration/17885-409390/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2 succeeded
	    > gcr.io/k8s-minikube/kicbase...:  25.93 MiB / 287.99 MiB  9.00% 43.12 MiB
	I0103 20:31:39.703414  525844 cache.go:157] /home/jenkins/minikube-integration/17885-409390/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2 exists
	I0103 20:31:39.703440  525844 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.20.2" -> "/home/jenkins/minikube-integration/17885-409390/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2" took 1.78521224s
	I0103 20:31:39.703454  525844 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.20.2 -> /home/jenkins/minikube-integration/17885-409390/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2 succeeded
	    > gcr.io/k8s-minikube/kicbase...:  25.94 MiB / 287.99 MiB  9.01% 40.34 MiB
	    > gcr.io/k8s-minikube/kicbase...:  59.59 MiB / 287.99 MiB  20.69% 39.69 MiB
	I0103 20:31:40.656060  525844 cache.go:157] /home/jenkins/minikube-integration/17885-409390/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2 exists
	I0103 20:31:40.656087  525844 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.20.2" -> "/home/jenkins/minikube-integration/17885-409390/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2" took 2.738124839s
	I0103 20:31:40.656100  525844 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.20.2 -> /home/jenkins/minikube-integration/17885-409390/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2 succeeded
	    > gcr.io/k8s-minikube/kicbase...:  67.79 MiB / 287.99 MiB  23.54% 39.69 MiB
	    > gcr.io/k8s-minikube/kicbase...:  130.15 MiB / 287.99 MiB  45.19% 41.22 MiB
	I0103 20:31:42.191391  525844 cache.go:157] /home/jenkins/minikube-integration/17885-409390/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0 exists
	I0103 20:31:42.191765  525844 cache.go:96] cache image "registry.k8s.io/etcd:3.4.13-0" -> "/home/jenkins/minikube-integration/17885-409390/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0" took 4.272653303s
	I0103 20:31:42.191992  525844 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.13-0 -> /home/jenkins/minikube-integration/17885-409390/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0 succeeded
	I0103 20:31:42.192045  525844 cache.go:87] Successfully saved all images to host disk.
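
Because the v1.20.2 cri-o arm64 preload tarball returns 404 (the preload.go:115 warning above), each image is cached individually: take a per-image lock, skip the image if its tarball already exists, otherwise download and save it. A minimal sketch of that check-then-save shape; saveImageTar is a hypothetical stand-in for the pull-and-write step, not minikube's function:

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// cachePathFor mirrors the layout in the log:
// "registry.k8s.io/pause:3.2" -> ".../registry.k8s.io/pause_3.2"
func cachePathFor(cacheDir, image string) string {
	return filepath.Join(cacheDir, strings.ReplaceAll(image, ":", "_"))
}

func ensureCached(cacheDir, image string) error {
	dest := cachePathFor(cacheDir, image)
	if _, err := os.Stat(dest); err == nil {
		fmt.Printf("cache image %q already exists at %s\n", image, dest)
		return nil // matches the "exists ... succeeded" lines above
	}
	return saveImageTar(image, dest)
}

// saveImageTar is a hypothetical placeholder for the real pull-and-save step.
func saveImageTar(image, dest string) error {
	return fmt.Errorf("not implemented: save %s to %s", image, dest)
}

func main() {
	_ = ensureCached(os.ExpandEnv("$HOME/.minikube/cache/images/arm64"), "registry.k8s.io/pause:3.2")
}
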
	    > gcr.io/k8s-minikube/kicbase...:  139.79 MiB / 287.99 MiB  48.54% 41.22 MiB
	    > gcr.io/k8s-minikube/kicbase...:  287.99 MiB / 287.99 MiB  100.00% 32.50 MiB
	I0103 20:31:47.433371  525844 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e as a tarball
	I0103 20:31:47.433382  525844 cache.go:162] Loading gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e from local cache
	I0103 20:31:47.651001  525844 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e from cached tarball
	I0103 20:31:47.651038  525844 cache.go:194] Successfully downloaded all kic artifacts
	I0103 20:31:47.651097  525844 start.go:365] acquiring machines lock for missing-upgrade-108038: {Name:mkf4a51ff80359ad8c844500cd455be693b99ef8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0103 20:31:47.651164  525844 start.go:369] acquired machines lock for "missing-upgrade-108038" in 43.085µs
	I0103 20:31:47.651191  525844 start.go:96] Skipping create...Using existing machine configuration
	I0103 20:31:47.651203  525844 fix.go:54] fixHost starting: 
	I0103 20:31:47.651473  525844 cli_runner.go:164] Run: docker container inspect missing-upgrade-108038 --format={{.State.Status}}
	W0103 20:31:47.682488  525844 cli_runner.go:211] docker container inspect missing-upgrade-108038 --format={{.State.Status}} returned with exit code 1
	I0103 20:31:47.682563  525844 fix.go:102] recreateIfNeeded on missing-upgrade-108038: state= err=unknown state "missing-upgrade-108038": docker container inspect missing-upgrade-108038 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-108038
	I0103 20:31:47.682583  525844 fix.go:107] machineExists: false. err=machine does not exist
	I0103 20:31:47.690892  525844 out.go:177] * docker "missing-upgrade-108038" container is missing, will recreate.
	I0103 20:31:47.700353  525844 delete.go:124] DEMOLISHING missing-upgrade-108038 ...
	I0103 20:31:47.700471  525844 cli_runner.go:164] Run: docker container inspect missing-upgrade-108038 --format={{.State.Status}}
	W0103 20:31:47.719567  525844 cli_runner.go:211] docker container inspect missing-upgrade-108038 --format={{.State.Status}} returned with exit code 1
	W0103 20:31:47.719637  525844 stop.go:75] unable to get state: unknown state "missing-upgrade-108038": docker container inspect missing-upgrade-108038 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-108038
	I0103 20:31:47.719660  525844 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "missing-upgrade-108038": docker container inspect missing-upgrade-108038 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-108038
	I0103 20:31:47.720195  525844 cli_runner.go:164] Run: docker container inspect missing-upgrade-108038 --format={{.State.Status}}
	W0103 20:31:47.745169  525844 cli_runner.go:211] docker container inspect missing-upgrade-108038 --format={{.State.Status}} returned with exit code 1
	I0103 20:31:47.745236  525844 delete.go:82] Unable to get host status for missing-upgrade-108038, assuming it has already been deleted: state: unknown state "missing-upgrade-108038": docker container inspect missing-upgrade-108038 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-108038
	I0103 20:31:47.745319  525844 cli_runner.go:164] Run: docker container inspect -f {{.Id}} missing-upgrade-108038
	W0103 20:31:47.776036  525844 cli_runner.go:211] docker container inspect -f {{.Id}} missing-upgrade-108038 returned with exit code 1
	I0103 20:31:47.776887  525844 kic.go:371] could not find the container missing-upgrade-108038 to remove it. will try anyways
	I0103 20:31:47.776970  525844 cli_runner.go:164] Run: docker container inspect missing-upgrade-108038 --format={{.State.Status}}
	W0103 20:31:47.804930  525844 cli_runner.go:211] docker container inspect missing-upgrade-108038 --format={{.State.Status}} returned with exit code 1
	W0103 20:31:47.805019  525844 oci.go:84] error getting container status, will try to delete anyways: unknown state "missing-upgrade-108038": docker container inspect missing-upgrade-108038 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-108038
	I0103 20:31:47.805105  525844 cli_runner.go:164] Run: docker exec --privileged -t missing-upgrade-108038 /bin/bash -c "sudo init 0"
	W0103 20:31:47.831734  525844 cli_runner.go:211] docker exec --privileged -t missing-upgrade-108038 /bin/bash -c "sudo init 0" returned with exit code 1
	I0103 20:31:47.831783  525844 oci.go:650] error shutdown missing-upgrade-108038: docker exec --privileged -t missing-upgrade-108038 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-108038
	I0103 20:31:48.832164  525844 cli_runner.go:164] Run: docker container inspect missing-upgrade-108038 --format={{.State.Status}}
	W0103 20:31:48.849574  525844 cli_runner.go:211] docker container inspect missing-upgrade-108038 --format={{.State.Status}} returned with exit code 1
	I0103 20:31:48.849644  525844 oci.go:662] temporary error verifying shutdown: unknown state "missing-upgrade-108038": docker container inspect missing-upgrade-108038 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-108038
	I0103 20:31:48.849654  525844 oci.go:664] temporary error: container missing-upgrade-108038 status is  but expect it to be exited
	I0103 20:31:48.849682  525844 retry.go:31] will retry after 261.189783ms: couldn't verify container is exited. %v: unknown state "missing-upgrade-108038": docker container inspect missing-upgrade-108038 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-108038
	I0103 20:31:49.111070  525844 cli_runner.go:164] Run: docker container inspect missing-upgrade-108038 --format={{.State.Status}}
	W0103 20:31:49.132473  525844 cli_runner.go:211] docker container inspect missing-upgrade-108038 --format={{.State.Status}} returned with exit code 1
	I0103 20:31:49.132529  525844 oci.go:662] temporary error verifying shutdown: unknown state "missing-upgrade-108038": docker container inspect missing-upgrade-108038 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-108038
	I0103 20:31:49.132538  525844 oci.go:664] temporary error: container missing-upgrade-108038 status is  but expect it to be exited
	I0103 20:31:49.132578  525844 retry.go:31] will retry after 390.256476ms: couldn't verify container is exited. %v: unknown state "missing-upgrade-108038": docker container inspect missing-upgrade-108038 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-108038
	I0103 20:31:49.523142  525844 cli_runner.go:164] Run: docker container inspect missing-upgrade-108038 --format={{.State.Status}}
	W0103 20:31:49.543395  525844 cli_runner.go:211] docker container inspect missing-upgrade-108038 --format={{.State.Status}} returned with exit code 1
	I0103 20:31:49.543458  525844 oci.go:662] temporary error verifying shutdown: unknown state "missing-upgrade-108038": docker container inspect missing-upgrade-108038 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-108038
	I0103 20:31:49.543474  525844 oci.go:664] temporary error: container missing-upgrade-108038 status is  but expect it to be exited
	I0103 20:31:49.543500  525844 retry.go:31] will retry after 685.250074ms: couldn't verify container is exited. %v: unknown state "missing-upgrade-108038": docker container inspect missing-upgrade-108038 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-108038
	I0103 20:31:50.229381  525844 cli_runner.go:164] Run: docker container inspect missing-upgrade-108038 --format={{.State.Status}}
	W0103 20:31:50.277933  525844 cli_runner.go:211] docker container inspect missing-upgrade-108038 --format={{.State.Status}} returned with exit code 1
	I0103 20:31:50.278008  525844 oci.go:662] temporary error verifying shutdown: unknown state "missing-upgrade-108038": docker container inspect missing-upgrade-108038 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-108038
	I0103 20:31:50.278024  525844 oci.go:664] temporary error: container missing-upgrade-108038 status is  but expect it to be exited
	I0103 20:31:50.278051  525844 retry.go:31] will retry after 1.975567915s: couldn't verify container is exited. %v: unknown state "missing-upgrade-108038": docker container inspect missing-upgrade-108038 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-108038
	I0103 20:31:52.254660  525844 cli_runner.go:164] Run: docker container inspect missing-upgrade-108038 --format={{.State.Status}}
	W0103 20:31:52.274017  525844 cli_runner.go:211] docker container inspect missing-upgrade-108038 --format={{.State.Status}} returned with exit code 1
	I0103 20:31:52.274084  525844 oci.go:662] temporary error verifying shutdown: unknown state "missing-upgrade-108038": docker container inspect missing-upgrade-108038 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-108038
	I0103 20:31:52.274093  525844 oci.go:664] temporary error: container missing-upgrade-108038 status is  but expect it to be exited
	I0103 20:31:52.274126  525844 retry.go:31] will retry after 1.649117786s: couldn't verify container is exited. %v: unknown state "missing-upgrade-108038": docker container inspect missing-upgrade-108038 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-108038
	I0103 20:31:53.923441  525844 cli_runner.go:164] Run: docker container inspect missing-upgrade-108038 --format={{.State.Status}}
	W0103 20:31:53.950062  525844 cli_runner.go:211] docker container inspect missing-upgrade-108038 --format={{.State.Status}} returned with exit code 1
	I0103 20:31:53.950136  525844 oci.go:662] temporary error verifying shutdown: unknown state "missing-upgrade-108038": docker container inspect missing-upgrade-108038 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-108038
	I0103 20:31:53.950147  525844 oci.go:664] temporary error: container missing-upgrade-108038 status is  but expect it to be exited
	I0103 20:31:53.950178  525844 retry.go:31] will retry after 4.038053102s: couldn't verify container is exited. %v: unknown state "missing-upgrade-108038": docker container inspect missing-upgrade-108038 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-108038
	I0103 20:31:57.988490  525844 cli_runner.go:164] Run: docker container inspect missing-upgrade-108038 --format={{.State.Status}}
	W0103 20:31:58.006852  525844 cli_runner.go:211] docker container inspect missing-upgrade-108038 --format={{.State.Status}} returned with exit code 1
	I0103 20:31:58.006920  525844 oci.go:662] temporary error verifying shutdown: unknown state "missing-upgrade-108038": docker container inspect missing-upgrade-108038 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-108038
	I0103 20:31:58.006929  525844 oci.go:664] temporary error: container missing-upgrade-108038 status is  but expect it to be exited
	I0103 20:31:58.006957  525844 retry.go:31] will retry after 5.287011562s: couldn't verify container is exited. %v: unknown state "missing-upgrade-108038": docker container inspect missing-upgrade-108038 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-108038
	I0103 20:32:03.294282  525844 cli_runner.go:164] Run: docker container inspect missing-upgrade-108038 --format={{.State.Status}}
	W0103 20:32:03.312636  525844 cli_runner.go:211] docker container inspect missing-upgrade-108038 --format={{.State.Status}} returned with exit code 1
	I0103 20:32:03.312703  525844 oci.go:662] temporary error verifying shutdown: unknown state "missing-upgrade-108038": docker container inspect missing-upgrade-108038 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-108038
	I0103 20:32:03.312718  525844 oci.go:664] temporary error: container missing-upgrade-108038 status is  but expect it to be exited
	I0103 20:32:03.312752  525844 oci.go:88] couldn't shut down missing-upgrade-108038 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "missing-upgrade-108038": docker container inspect missing-upgrade-108038 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-108038
	 
	I0103 20:32:03.312813  525844 cli_runner.go:164] Run: docker rm -f -v missing-upgrade-108038
	I0103 20:32:03.329606  525844 cli_runner.go:164] Run: docker container inspect -f {{.Id}} missing-upgrade-108038
	W0103 20:32:03.346769  525844 cli_runner.go:211] docker container inspect -f {{.Id}} missing-upgrade-108038 returned with exit code 1
	I0103 20:32:03.346862  525844 cli_runner.go:164] Run: docker network inspect missing-upgrade-108038 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0103 20:32:03.364030  525844 cli_runner.go:164] Run: docker network rm missing-upgrade-108038
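
The long block above is a retry loop: each failed `docker container inspect` is retried after a growing, jittered delay (261ms, 390ms, 685ms, up to 5.29s), after which minikube gives up verifying shutdown and falls back to `docker rm -f -v` plus `docker network rm`. A minimal sketch of that backoff shape (the exact delays minikube computes are jittered and not reproduced here):

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// retryUntil calls check with a randomized, growing delay between attempts,
// returning the last error once the deadline has passed.
func retryUntil(deadline time.Duration, check func() error) error {
	start := time.Now()
	delay := 200 * time.Millisecond
	for {
		err := check()
		if err == nil {
			return nil
		}
		if time.Since(start) > deadline {
			return fmt.Errorf("giving up: %w", err)
		}
		sleep := delay + time.Duration(rand.Int63n(int64(delay))) // jitter
		fmt.Printf("will retry after %v: %v\n", sleep, err)
		time.Sleep(sleep)
		delay *= 2
	}
}

func main() {
	err := retryUntil(15*time.Second, func() error {
		return fmt.Errorf(`unknown state "missing-upgrade-108038"`)
	})
	fmt.Println(err) // then fall back to `docker rm -f -v`, as the log does
}
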
	I0103 20:32:03.472639  525844 fix.go:114] Sleeping 1 second for extra luck!
	I0103 20:32:04.472810  525844 start.go:125] createHost starting for "" (driver="docker")
	I0103 20:32:04.476352  525844 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0103 20:32:04.476514  525844 start.go:159] libmachine.API.Create for "missing-upgrade-108038" (driver="docker")
	I0103 20:32:04.476543  525844 client.go:168] LocalClient.Create starting
	I0103 20:32:04.476615  525844 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17885-409390/.minikube/certs/ca.pem
	I0103 20:32:04.476653  525844 main.go:141] libmachine: Decoding PEM data...
	I0103 20:32:04.476671  525844 main.go:141] libmachine: Parsing certificate...
	I0103 20:32:04.476727  525844 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17885-409390/.minikube/certs/cert.pem
	I0103 20:32:04.476757  525844 main.go:141] libmachine: Decoding PEM data...
	I0103 20:32:04.476777  525844 main.go:141] libmachine: Parsing certificate...
	I0103 20:32:04.477052  525844 cli_runner.go:164] Run: docker network inspect missing-upgrade-108038 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0103 20:32:04.494162  525844 cli_runner.go:211] docker network inspect missing-upgrade-108038 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0103 20:32:04.494239  525844 network_create.go:281] running [docker network inspect missing-upgrade-108038] to gather additional debugging logs...
	I0103 20:32:04.494267  525844 cli_runner.go:164] Run: docker network inspect missing-upgrade-108038
	W0103 20:32:04.516852  525844 cli_runner.go:211] docker network inspect missing-upgrade-108038 returned with exit code 1
	I0103 20:32:04.516884  525844 network_create.go:284] error running [docker network inspect missing-upgrade-108038]: docker network inspect missing-upgrade-108038: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network missing-upgrade-108038 not found
	I0103 20:32:04.516899  525844 network_create.go:286] output of [docker network inspect missing-upgrade-108038]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network missing-upgrade-108038 not found
	
	** /stderr **
	I0103 20:32:04.517004  525844 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0103 20:32:04.534179  525844 network.go:214] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-e48a1c7f0405 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:af:08:39:14} reservation:<nil>}
	I0103 20:32:04.534601  525844 network.go:214] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-5ad9a395bb96 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:d1:45:f6:7e} reservation:<nil>}
	I0103 20:32:04.534898  525844 network.go:214] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-30a62dbfff17 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:f0:05:59:59} reservation:<nil>}
	I0103 20:32:04.535309  525844 network.go:209] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40033762a0}
	I0103 20:32:04.535329  525844 network_create.go:124] attempt to create docker network missing-upgrade-108038 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0103 20:32:04.535387  525844 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=missing-upgrade-108038 missing-upgrade-108038
	I0103 20:32:04.613732  525844 network_create.go:108] docker network missing-upgrade-108038 192.168.76.0/24 created
	I0103 20:32:04.613767  525844 kic.go:121] calculated static IP "192.168.76.2" for the "missing-upgrade-108038" container
	I0103 20:32:04.613842  525844 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0103 20:32:04.631816  525844 cli_runner.go:164] Run: docker volume create missing-upgrade-108038 --label name.minikube.sigs.k8s.io=missing-upgrade-108038 --label created_by.minikube.sigs.k8s.io=true
	I0103 20:32:04.648306  525844 oci.go:103] Successfully created a docker volume missing-upgrade-108038
	I0103 20:32:04.648392  525844 cli_runner.go:164] Run: docker run --rm --name missing-upgrade-108038-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=missing-upgrade-108038 --entrypoint /usr/bin/test -v missing-upgrade-108038:/var gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e -d /var/lib
	I0103 20:32:05.289507  525844 oci.go:107] Successfully prepared a docker volume missing-upgrade-108038
	I0103 20:32:05.289541  525844 preload.go:132] Checking if preload exists for k8s version v1.20.2 and runtime crio
	W0103 20:32:05.289703  525844 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0103 20:32:05.289811  525844 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0103 20:32:05.357798  525844 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname missing-upgrade-108038 --name missing-upgrade-108038 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=missing-upgrade-108038 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=missing-upgrade-108038 --network missing-upgrade-108038 --ip 192.168.76.2 --volume missing-upgrade-108038:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e
	I0103 20:32:05.724443  525844 cli_runner.go:164] Run: docker container inspect missing-upgrade-108038 --format={{.State.Running}}
	I0103 20:32:05.750496  525844 cli_runner.go:164] Run: docker container inspect missing-upgrade-108038 --format={{.State.Status}}
	I0103 20:32:05.776206  525844 cli_runner.go:164] Run: docker exec missing-upgrade-108038 stat /var/lib/dpkg/alternatives/iptables
	I0103 20:32:05.849053  525844 oci.go:144] the created container "missing-upgrade-108038" has a running status.
	I0103 20:32:05.849081  525844 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17885-409390/.minikube/machines/missing-upgrade-108038/id_rsa...
	I0103 20:32:06.393109  525844 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17885-409390/.minikube/machines/missing-upgrade-108038/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0103 20:32:06.420021  525844 cli_runner.go:164] Run: docker container inspect missing-upgrade-108038 --format={{.State.Status}}
	I0103 20:32:06.443755  525844 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0103 20:32:06.443779  525844 kic_runner.go:114] Args: [docker exec --privileged missing-upgrade-108038 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0103 20:32:06.499109  525844 cli_runner.go:164] Run: docker container inspect missing-upgrade-108038 --format={{.State.Status}}
	I0103 20:32:06.526032  525844 machine.go:88] provisioning docker machine ...
	I0103 20:32:06.526065  525844 ubuntu.go:169] provisioning hostname "missing-upgrade-108038"
	I0103 20:32:06.526133  525844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-108038
	I0103 20:32:06.548429  525844 main.go:141] libmachine: Using SSH client type: native
	I0103 20:32:06.548877  525844 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3beb50] 0x3c12c0 <nil>  [] 0s} 127.0.0.1 33277 <nil> <nil>}
	I0103 20:32:06.548899  525844 main.go:141] libmachine: About to run SSH command:
	sudo hostname missing-upgrade-108038 && echo "missing-upgrade-108038" | sudo tee /etc/hostname
	I0103 20:32:06.712767  525844 main.go:141] libmachine: SSH cmd err, output: <nil>: missing-upgrade-108038
	
	I0103 20:32:06.712846  525844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-108038
	I0103 20:32:06.741570  525844 main.go:141] libmachine: Using SSH client type: native
	I0103 20:32:06.742050  525844 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3beb50] 0x3c12c0 <nil>  [] 0s} 127.0.0.1 33277 <nil> <nil>}
	I0103 20:32:06.742079  525844 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smissing-upgrade-108038' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 missing-upgrade-108038/g' /etc/hosts;
				else 
					echo '127.0.1.1 missing-upgrade-108038' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0103 20:32:06.888243  525844 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0103 20:32:06.888276  525844 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17885-409390/.minikube CaCertPath:/home/jenkins/minikube-integration/17885-409390/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17885-409390/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17885-409390/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17885-409390/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17885-409390/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17885-409390/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17885-409390/.minikube}
	I0103 20:32:06.888321  525844 ubuntu.go:177] setting up certificates
	I0103 20:32:06.888332  525844 provision.go:83] configureAuth start
	I0103 20:32:06.888412  525844 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-108038
	I0103 20:32:06.915676  525844 provision.go:138] copyHostCerts
	I0103 20:32:06.915742  525844 exec_runner.go:144] found /home/jenkins/minikube-integration/17885-409390/.minikube/ca.pem, removing ...
	I0103 20:32:06.915751  525844 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17885-409390/.minikube/ca.pem
	I0103 20:32:06.915827  525844 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17885-409390/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17885-409390/.minikube/ca.pem (1078 bytes)
	I0103 20:32:06.916007  525844 exec_runner.go:144] found /home/jenkins/minikube-integration/17885-409390/.minikube/cert.pem, removing ...
	I0103 20:32:06.916016  525844 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17885-409390/.minikube/cert.pem
	I0103 20:32:06.916053  525844 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17885-409390/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17885-409390/.minikube/cert.pem (1123 bytes)
	I0103 20:32:06.916147  525844 exec_runner.go:144] found /home/jenkins/minikube-integration/17885-409390/.minikube/key.pem, removing ...
	I0103 20:32:06.916152  525844 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17885-409390/.minikube/key.pem
	I0103 20:32:06.916183  525844 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17885-409390/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17885-409390/.minikube/key.pem (1679 bytes)
	I0103 20:32:06.916227  525844 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17885-409390/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17885-409390/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17885-409390/.minikube/certs/ca-key.pem org=jenkins.missing-upgrade-108038 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube missing-upgrade-108038]
	I0103 20:32:07.395784  525844 provision.go:172] copyRemoteCerts
	I0103 20:32:07.395863  525844 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0103 20:32:07.395916  525844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-108038
	I0103 20:32:07.418819  525844 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33277 SSHKeyPath:/home/jenkins/minikube-integration/17885-409390/.minikube/machines/missing-upgrade-108038/id_rsa Username:docker}
	I0103 20:32:07.519607  525844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-409390/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0103 20:32:07.543538  525844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-409390/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0103 20:32:07.566418  525844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-409390/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0103 20:32:07.589198  525844 provision.go:86] duration metric: configureAuth took 700.845464ms
	I0103 20:32:07.589224  525844 ubuntu.go:193] setting minikube options for container-runtime
	I0103 20:32:07.589435  525844 config.go:182] Loaded profile config "missing-upgrade-108038": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I0103 20:32:07.589540  525844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-108038
	I0103 20:32:07.607457  525844 main.go:141] libmachine: Using SSH client type: native
	I0103 20:32:07.607882  525844 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3beb50] 0x3c12c0 <nil>  [] 0s} 127.0.0.1 33277 <nil> <nil>}
	I0103 20:32:07.607911  525844 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0103 20:32:08.024889  525844 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0103 20:32:08.024911  525844 machine.go:91] provisioned docker machine in 1.498856813s
	I0103 20:32:08.024920  525844 client.go:171] LocalClient.Create took 3.548371231s
	I0103 20:32:08.024935  525844 start.go:167] duration metric: libmachine.API.Create for "missing-upgrade-108038" took 3.548424088s
	I0103 20:32:08.024942  525844 start.go:300] post-start starting for "missing-upgrade-108038" (driver="docker")
	I0103 20:32:08.024960  525844 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0103 20:32:08.025029  525844 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0103 20:32:08.025079  525844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-108038
	I0103 20:32:08.046459  525844 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33277 SSHKeyPath:/home/jenkins/minikube-integration/17885-409390/.minikube/machines/missing-upgrade-108038/id_rsa Username:docker}
	I0103 20:32:08.147901  525844 ssh_runner.go:195] Run: cat /etc/os-release
	I0103 20:32:08.152225  525844 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0103 20:32:08.152253  525844 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0103 20:32:08.152265  525844 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0103 20:32:08.152272  525844 info.go:137] Remote host: Ubuntu 20.04.1 LTS
	I0103 20:32:08.152282  525844 filesync.go:126] Scanning /home/jenkins/minikube-integration/17885-409390/.minikube/addons for local assets ...
	I0103 20:32:08.152347  525844 filesync.go:126] Scanning /home/jenkins/minikube-integration/17885-409390/.minikube/files for local assets ...
	I0103 20:32:08.152447  525844 filesync.go:149] local asset: /home/jenkins/minikube-integration/17885-409390/.minikube/files/etc/ssl/certs/4147632.pem -> 4147632.pem in /etc/ssl/certs
	I0103 20:32:08.152589  525844 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0103 20:32:08.161325  525844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-409390/.minikube/files/etc/ssl/certs/4147632.pem --> /etc/ssl/certs/4147632.pem (1708 bytes)
	I0103 20:32:08.184268  525844 start.go:303] post-start completed in 159.30832ms
	I0103 20:32:08.184641  525844 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-108038
	I0103 20:32:08.206764  525844 profile.go:148] Saving config to /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/missing-upgrade-108038/config.json ...
	I0103 20:32:08.207066  525844 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0103 20:32:08.207106  525844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-108038
	I0103 20:32:08.226020  525844 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33277 SSHKeyPath:/home/jenkins/minikube-integration/17885-409390/.minikube/machines/missing-upgrade-108038/id_rsa Username:docker}
	I0103 20:32:08.325965  525844 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0103 20:32:08.331589  525844 start.go:128] duration metric: createHost completed in 3.858738622s
	I0103 20:32:08.331695  525844 cli_runner.go:164] Run: docker container inspect missing-upgrade-108038 --format={{.State.Status}}
	W0103 20:32:08.352464  525844 fix.go:128] unexpected machine state, will restart: <nil>
	I0103 20:32:08.352495  525844 machine.go:88] provisioning docker machine ...
	I0103 20:32:08.352512  525844 ubuntu.go:169] provisioning hostname "missing-upgrade-108038"
	I0103 20:32:08.352583  525844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-108038
	I0103 20:32:08.370786  525844 main.go:141] libmachine: Using SSH client type: native
	I0103 20:32:08.371194  525844 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3beb50] 0x3c12c0 <nil>  [] 0s} 127.0.0.1 33277 <nil> <nil>}
	I0103 20:32:08.371211  525844 main.go:141] libmachine: About to run SSH command:
	sudo hostname missing-upgrade-108038 && echo "missing-upgrade-108038" | sudo tee /etc/hostname
	I0103 20:32:08.522302  525844 main.go:141] libmachine: SSH cmd err, output: <nil>: missing-upgrade-108038
	
	I0103 20:32:08.522394  525844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-108038
	I0103 20:32:08.541470  525844 main.go:141] libmachine: Using SSH client type: native
	I0103 20:32:08.541871  525844 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3beb50] 0x3c12c0 <nil>  [] 0s} 127.0.0.1 33277 <nil> <nil>}
	I0103 20:32:08.541889  525844 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smissing-upgrade-108038' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 missing-upgrade-108038/g' /etc/hosts;
				else 
					echo '127.0.1.1 missing-upgrade-108038' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0103 20:32:08.683633  525844 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0103 20:32:08.683673  525844 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17885-409390/.minikube CaCertPath:/home/jenkins/minikube-integration/17885-409390/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17885-409390/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17885-409390/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17885-409390/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17885-409390/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17885-409390/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17885-409390/.minikube}
	I0103 20:32:08.683689  525844 ubuntu.go:177] setting up certificates
	I0103 20:32:08.683698  525844 provision.go:83] configureAuth start
	I0103 20:32:08.683761  525844 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-108038
	I0103 20:32:08.700902  525844 provision.go:138] copyHostCerts
	I0103 20:32:08.700969  525844 exec_runner.go:144] found /home/jenkins/minikube-integration/17885-409390/.minikube/ca.pem, removing ...
	I0103 20:32:08.700985  525844 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17885-409390/.minikube/ca.pem
	I0103 20:32:08.701061  525844 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17885-409390/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17885-409390/.minikube/ca.pem (1078 bytes)
	I0103 20:32:08.701159  525844 exec_runner.go:144] found /home/jenkins/minikube-integration/17885-409390/.minikube/cert.pem, removing ...
	I0103 20:32:08.701169  525844 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17885-409390/.minikube/cert.pem
	I0103 20:32:08.701196  525844 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17885-409390/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17885-409390/.minikube/cert.pem (1123 bytes)
	I0103 20:32:08.701255  525844 exec_runner.go:144] found /home/jenkins/minikube-integration/17885-409390/.minikube/key.pem, removing ...
	I0103 20:32:08.701260  525844 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17885-409390/.minikube/key.pem
	I0103 20:32:08.701285  525844 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17885-409390/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17885-409390/.minikube/key.pem (1679 bytes)
	I0103 20:32:08.701336  525844 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17885-409390/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17885-409390/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17885-409390/.minikube/certs/ca-key.pem org=jenkins.missing-upgrade-108038 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube missing-upgrade-108038]
	I0103 20:32:09.015049  525844 provision.go:172] copyRemoteCerts
	I0103 20:32:09.015120  525844 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0103 20:32:09.015164  525844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-108038
	I0103 20:32:09.034278  525844 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33277 SSHKeyPath:/home/jenkins/minikube-integration/17885-409390/.minikube/machines/missing-upgrade-108038/id_rsa Username:docker}
	I0103 20:32:09.136395  525844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-409390/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0103 20:32:09.160666  525844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-409390/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0103 20:32:09.184154  525844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-409390/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0103 20:32:09.207285  525844 provision.go:86] duration metric: configureAuth took 523.570327ms
	I0103 20:32:09.207314  525844 ubuntu.go:193] setting minikube options for container-runtime
	I0103 20:32:09.207527  525844 config.go:182] Loaded profile config "missing-upgrade-108038": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I0103 20:32:09.207636  525844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-108038
	I0103 20:32:09.225963  525844 main.go:141] libmachine: Using SSH client type: native
	I0103 20:32:09.226364  525844 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3beb50] 0x3c12c0 <nil>  [] 0s} 127.0.0.1 33277 <nil> <nil>}
	I0103 20:32:09.226384  525844 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0103 20:32:09.563600  525844 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0103 20:32:09.563623  525844 machine.go:91] provisioned docker machine in 1.211120433s
	I0103 20:32:09.563639  525844 start.go:300] post-start starting for "missing-upgrade-108038" (driver="docker")
	I0103 20:32:09.563650  525844 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0103 20:32:09.563716  525844 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0103 20:32:09.563764  525844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-108038
	I0103 20:32:09.584783  525844 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33277 SSHKeyPath:/home/jenkins/minikube-integration/17885-409390/.minikube/machines/missing-upgrade-108038/id_rsa Username:docker}
	I0103 20:32:09.687690  525844 ssh_runner.go:195] Run: cat /etc/os-release
	I0103 20:32:09.691750  525844 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0103 20:32:09.691780  525844 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0103 20:32:09.691792  525844 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0103 20:32:09.691798  525844 info.go:137] Remote host: Ubuntu 20.04.1 LTS
	I0103 20:32:09.691807  525844 filesync.go:126] Scanning /home/jenkins/minikube-integration/17885-409390/.minikube/addons for local assets ...
	I0103 20:32:09.691864  525844 filesync.go:126] Scanning /home/jenkins/minikube-integration/17885-409390/.minikube/files for local assets ...
	I0103 20:32:09.691953  525844 filesync.go:149] local asset: /home/jenkins/minikube-integration/17885-409390/.minikube/files/etc/ssl/certs/4147632.pem -> 4147632.pem in /etc/ssl/certs
	I0103 20:32:09.692092  525844 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0103 20:32:09.700833  525844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-409390/.minikube/files/etc/ssl/certs/4147632.pem --> /etc/ssl/certs/4147632.pem (1708 bytes)
	I0103 20:32:09.724280  525844 start.go:303] post-start completed in 160.625234ms
	I0103 20:32:09.724360  525844 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0103 20:32:09.724407  525844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-108038
	I0103 20:32:09.742234  525844 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33277 SSHKeyPath:/home/jenkins/minikube-integration/17885-409390/.minikube/machines/missing-upgrade-108038/id_rsa Username:docker}
	I0103 20:32:09.836362  525844 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0103 20:32:09.841847  525844 fix.go:56] fixHost completed within 22.190638163s
	I0103 20:32:09.841869  525844 start.go:83] releasing machines lock for "missing-upgrade-108038", held for 22.190690741s
	I0103 20:32:09.841938  525844 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-108038
	I0103 20:32:09.860369  525844 ssh_runner.go:195] Run: cat /version.json
	I0103 20:32:09.860422  525844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-108038
	I0103 20:32:09.860662  525844 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0103 20:32:09.860716  525844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-108038
	I0103 20:32:09.883050  525844 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33277 SSHKeyPath:/home/jenkins/minikube-integration/17885-409390/.minikube/machines/missing-upgrade-108038/id_rsa Username:docker}
	I0103 20:32:09.883480  525844 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33277 SSHKeyPath:/home/jenkins/minikube-integration/17885-409390/.minikube/machines/missing-upgrade-108038/id_rsa Username:docker}
	W0103 20:32:10.127154  525844 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0103 20:32:10.127257  525844 ssh_runner.go:195] Run: systemctl --version
	I0103 20:32:10.133895  525844 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0103 20:32:10.239364  525844 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0103 20:32:10.245203  525844 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0103 20:32:10.271029  525844 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0103 20:32:10.271150  525844 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0103 20:32:10.308550  525844 cni.go:262] disabled [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0103 20:32:10.308614  525844 start.go:475] detecting cgroup driver to use...
	I0103 20:32:10.308659  525844 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0103 20:32:10.308738  525844 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0103 20:32:10.334901  525844 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0103 20:32:10.347421  525844 docker.go:203] disabling cri-docker service (if available) ...
	I0103 20:32:10.347502  525844 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0103 20:32:10.359332  525844 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0103 20:32:10.371817  525844 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W0103 20:32:10.385078  525844 docker.go:213] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I0103 20:32:10.385172  525844 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0103 20:32:10.493634  525844 docker.go:219] disabling docker service ...
	I0103 20:32:10.493707  525844 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0103 20:32:10.507084  525844 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0103 20:32:10.519687  525844 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0103 20:32:10.623119  525844 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0103 20:32:10.729071  525844 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0103 20:32:10.741797  525844 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0103 20:32:10.760180  525844 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0103 20:32:10.760264  525844 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0103 20:32:10.773750  525844 out.go:177] 
	W0103 20:32:10.775282  525844 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	W0103 20:32:10.775305  525844 out.go:239] * 
	* 
	W0103 20:32:10.776295  525844 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0103 20:32:10.779319  525844 out.go:177] 
** /stderr **
version_upgrade_test.go:344: failed missing container upgrade from v1.17.0. args: out/minikube-linux-arm64 start -p missing-upgrade-108038 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio : exit status 90
version_upgrade_test.go:346: *** TestMissingContainerUpgrade FAILED at 2024-01-03 20:32:10.821988086 +0000 UTC m=+2397.470731422
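Note on the failure above: the profile is rebuilt from the legacy base image gcr.io/k8s-minikube/kicbase:v0.0.17 (see the docker run line earlier in the log), and that image evidently does not ship /etc/crio/crio.conf.d/02-crio.conf, so the pause_image sed exits with status 2 and minikube aborts with RUNTIME_ENABLE. A minimal Go sketch of a guarded version of that step follows; ensurePauseImage is a hypothetical helper written for illustration, not minikube's actual fix:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// ensurePauseImage guards the command that failed above: when the CRI-O
	// drop-in is missing (as on kicbase v0.0.17), it writes the file with the
	// desired pause_image instead of letting sed fail on a nonexistent path.
	func ensurePauseImage(image string) error {
		script := fmt.Sprintf(`conf=/etc/crio/crio.conf.d/02-crio.conf
	if [ -f "$conf" ]; then
	  sudo sed -i 's|^.*pause_image = .*$|pause_image = "%[1]s"|' "$conf"
	else
	  sudo mkdir -p /etc/crio/crio.conf.d
	  printf '[crio.image]\npause_image = "%[1]s"\n' | sudo tee "$conf" >/dev/null
	fi`, image)
		out, err := exec.Command("sh", "-c", script).CombinedOutput()
		if err != nil {
			return fmt.Errorf("update pause_image: %v: %s", err, out)
		}
		return nil
	}

	func main() {
		if err := ensurePauseImage("registry.k8s.io/pause:3.2"); err != nil {
			fmt.Println(err)
		}
	}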
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMissingContainerUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect missing-upgrade-108038
helpers_test.go:235: (dbg) docker inspect missing-upgrade-108038:
-- stdout --
	[
	    {
	        "Id": "877a1bb9a89b5d0e67bfaf5ed58dacf6a3546e43c64a85f739e5eacf27e04a75",
	        "Created": "2024-01-03T20:32:05.37443891Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 527083,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-01-03T20:32:05.714964929Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:9b79b8263a5873a7b57b8bb7698df1f71e90108b3174dea92dc6c576c0a9dbf9",
	        "ResolvConfPath": "/var/lib/docker/containers/877a1bb9a89b5d0e67bfaf5ed58dacf6a3546e43c64a85f739e5eacf27e04a75/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/877a1bb9a89b5d0e67bfaf5ed58dacf6a3546e43c64a85f739e5eacf27e04a75/hostname",
	        "HostsPath": "/var/lib/docker/containers/877a1bb9a89b5d0e67bfaf5ed58dacf6a3546e43c64a85f739e5eacf27e04a75/hosts",
	        "LogPath": "/var/lib/docker/containers/877a1bb9a89b5d0e67bfaf5ed58dacf6a3546e43c64a85f739e5eacf27e04a75/877a1bb9a89b5d0e67bfaf5ed58dacf6a3546e43c64a85f739e5eacf27e04a75-json.log",
	        "Name": "/missing-upgrade-108038",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "missing-upgrade-108038:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "missing-upgrade-108038",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/c75d73be44769497f856b8a43392926d6c0f6402bbe0ee8913edb6710b9f72b9-init/diff:/var/lib/docker/overlay2/5c46a0427ca2181b5ac31b2de22e30ddfa257e8bc7be71ff0c2fcd0f54cebea3/diff:/var/lib/docker/overlay2/e1835a24a72a1978eb7cbb8323828a0fe8e48c243d56aca4e413f12ea48ea255/diff:/var/lib/docker/overlay2/409e5c5c6933761ba8d9f050bd7de867564521ee89d6a1bdaf5b2f62ae6e7158/diff:/var/lib/docker/overlay2/820604d4b926dd4f75e58277782c265dd8895ff5f633bcd94584758d08794e0a/diff:/var/lib/docker/overlay2/e19523e9de46639d6c665eec5919a033fe7c37c8dcaccb0455d61577cbf80a0e/diff:/var/lib/docker/overlay2/4c64dc42027edc862502bf42d025625034b2854396a8efd33ab6694ba05af955/diff:/var/lib/docker/overlay2/2ab2919c9604e2b3bf91cb64953912f3abd04a5349876668fb93668a352e524c/diff:/var/lib/docker/overlay2/8ed6eda7b5fc73a75bfc84a6e97dc39dcd861a595a3eebe20ad0fb040baf8f69/diff:/var/lib/docker/overlay2/5200182f2fa6d6c45db9dc7ba7e32c54e76a4517456151a5d2c25f4494d472ad/diff:/var/lib/docker/overlay2/589865
86f6bf7b334b759da2b98e6d00caa41b66d3a71dd2ea81da70ff3dc8ee/diff:/var/lib/docker/overlay2/67a395ccff492a628dd49ad0c897ac89df5ae8db0ec35983271040b67abdddb4/diff:/var/lib/docker/overlay2/52a6babc5c264a36ef7dd70bfe848c02f705030b01fd22384337a8d6aa808b70/diff:/var/lib/docker/overlay2/96330eb3a055ccde647fecf62841ccf9351b905211cc3aa275b80ff1d697e17c/diff:/var/lib/docker/overlay2/50f7a390e99f2434bdd55db4b84f6ed636ad7086ea16c5763b7bd02a20a4130d/diff:/var/lib/docker/overlay2/58105a736c8da0366743732cbc133c5e24b7a6579e0d77d833d2dcb0fd5f45d1/diff:/var/lib/docker/overlay2/e91a396297ce1686b01f535693f8e8c364df4637368716fc87af37a2f10fbff2/diff:/var/lib/docker/overlay2/77b168f3f6f2e5c3a97cfef652dc4040568651a5fa3b5134dc77d9be15634502/diff:/var/lib/docker/overlay2/8b3260dfd58a02b7d8159b6afc3c159bd7291cfd9a1775fbaf29fe4e76f69a20/diff:/var/lib/docker/overlay2/c753d6214069dd46cd0081b824f43c0df60ffdfb938b92aea554ae3a8b9c1508/diff:/var/lib/docker/overlay2/c680d335eb38a7a2abe5b76424c6c54ed385b95b20cc532277cbe73a6201605f/diff:/var/lib/d
ocker/overlay2/e327f1867e80da378eaa32aea670dfa17608a323df5e2ce2a2927ab71a89434c/diff:/var/lib/docker/overlay2/8f7fc611ecb811805ce16c049acc698a4bbf315767108827580fd068c114a49f/diff:/var/lib/docker/overlay2/b4c041c40097d3daa25f56d9b31ce6dc7ea86b72f56b8099925359624e4c835a/diff:/var/lib/docker/overlay2/c2d84c8c50a90b4570914536b5feb2c44fd914c8795ec8fa02c4d350f0c1819c/diff:/var/lib/docker/overlay2/9a19bfa03522606fcbb9edfacd46b8252ef3d62dbe752bae1df89bbd4917e6de/diff:/var/lib/docker/overlay2/e04503bad449f92373005711667d7f4e3506424d4e841f4c56a7a0bdce070017/diff:/var/lib/docker/overlay2/16dbe753ab78a508c4259387e8bff48c1eeb15f4496ba987c815a2f5cdc729af/diff:/var/lib/docker/overlay2/94ee89cad2ca3484068b8feb6abe9debee0032df6ec0a66d2ca913f5ac446cf6/diff:/var/lib/docker/overlay2/6895cf5a950b8ca218cf7963ff33158826561a3d0d5557f4e1ccb2ddef13524b/diff:/var/lib/docker/overlay2/248d8a94b8d3fb1543c9cc0c2538dda1c7cb836af69deebb22889c97f3f680bf/diff:/var/lib/docker/overlay2/9ebf525bfe9fa5b50bbda7834311a1794d6fcbb388ae8e417ad386b34be
f0821/diff:/var/lib/docker/overlay2/9ab0def267dbbc6372345e4b8b33b2d2a23092804849e2a55223ba546ea490fb/diff:/var/lib/docker/overlay2/484f5f631f05654cb13c756a72edbbdf9c0a050b7d3cfb9cacbe84422a291d68/diff:/var/lib/docker/overlay2/dd81b84a651c35f6c6bd454071c315e512b0b4902814f9dbe70575d32f975a3c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c75d73be44769497f856b8a43392926d6c0f6402bbe0ee8913edb6710b9f72b9/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c75d73be44769497f856b8a43392926d6c0f6402bbe0ee8913edb6710b9f72b9/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c75d73be44769497f856b8a43392926d6c0f6402bbe0ee8913edb6710b9f72b9/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "missing-upgrade-108038",
	                "Source": "/var/lib/docker/volumes/missing-upgrade-108038/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "missing-upgrade-108038",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "missing-upgrade-108038",
	                "name.minikube.sigs.k8s.io": "missing-upgrade-108038",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "cae32136bda70c082e8293224c2734f7e0c9c15475ebf73d89892f93d4c71b30",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33277"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33276"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33273"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33275"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33274"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/cae32136bda7",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "missing-upgrade-108038": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "877a1bb9a89b",
	                        "missing-upgrade-108038"
	                    ],
	                    "NetworkID": "a568a9eb69e79614603c5707dc99468b1775f1c374fec74b4f0b1e3b9b593d7a",
	                    "EndpointID": "65ee62661d6495274754067338af521cf4f8d37bdce8d7db01d30a47d6881d6d",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]
-- /stdout --
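The "Ports" map in the inspect output above is what the repeated cli_runner calls in the test log read: minikube resolves the localhost port that Docker mapped to the container's SSH port (22/tcp -> 33277 here) with a Go template. A standalone sketch of the same lookup, assuming the container from this post-mortem still exists:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Same Go template the cli_runner lines above pass to docker inspect;
		// it indexes NetworkSettings.Ports["22/tcp"][0].HostPort.
		out, err := exec.Command("docker", "container", "inspect",
			"-f", `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
			"missing-upgrade-108038").Output()
		if err != nil {
			fmt.Println("inspect failed:", err)
			return
		}
		fmt.Println("ssh host port:", strings.TrimSpace(string(out))) // 33277 in this run
	}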
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p missing-upgrade-108038 -n missing-upgrade-108038
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p missing-upgrade-108038 -n missing-upgrade-108038: exit status 6 (340.211931ms)
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`
-- /stdout --
** stderr ** 
	E0103 20:32:11.162931  528070 status.go:415] kubeconfig endpoint: got: 192.168.59.7:8443, want: 192.168.76.2:8443
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "missing-upgrade-108038" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
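The status error above explains the "stale minikube-vm" warning: the kubeconfig still points at the endpoint of the original v1.17.0 cluster (192.168.59.7:8443) while the recreated container answers on 192.168.76.2:8443. Running `minikube update-context`, as the status output itself suggests, would rewrite the stored endpoint; a trivial sketch of invoking it for this profile (moot here, since the profile is deleted just below, but shown for completeness):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Rewrites the kubeconfig endpoint (192.168.59.7:8443 above) to the
		// cluster's current address; profile name taken from the failing test.
		out, err := exec.Command("out/minikube-linux-arm64",
			"update-context", "-p", "missing-upgrade-108038").CombinedOutput()
		fmt.Print(string(out))
		if err != nil {
			fmt.Println(err)
		}
	}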
helpers_test.go:175: Cleaning up "missing-upgrade-108038" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-108038
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-108038: (1.899773512s)
--- FAIL: TestMissingContainerUpgrade (184.65s)
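Why the address moved: the original profile evidently lived on 192.168.59.0/24, but when the missing network was recreated, the scan at the top of the log skipped the occupied 192.168.49.0/24, 192.168.58.0/24 and 192.168.67.0/24 bridges and took the first free /24, 192.168.76.0/24. A toy version of that scan, with the step size of 9 inferred from the 49 -> 58 -> 67 -> 76 progression in the log:

	package main

	import "fmt"

	// firstFreeSubnet mimics the network.go scan logged above: walk candidate
	// 192.168.x.0/24 networks and return the first one not claimed by an
	// existing bridge. The taken set mirrors the subnets reported as skipped.
	func firstFreeSubnet(taken map[string]bool) string {
		for octet := 49; octet <= 254; octet += 9 {
			subnet := fmt.Sprintf("192.168.%d.0/24", octet)
			if !taken[subnet] {
				return subnet
			}
		}
		return ""
	}

	func main() {
		taken := map[string]bool{
			"192.168.49.0/24": true,
			"192.168.58.0/24": true,
			"192.168.67.0/24": true,
		}
		fmt.Println(firstFreeSubnet(taken)) // 192.168.76.0/24
	}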
TestStoppedBinaryUpgrade/Upgrade (99.98s)
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:196: (dbg) Run:  /tmp/minikube-v1.17.0.1123560391.exe start -p stopped-upgrade-077088 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:196: (dbg) Done: /tmp/minikube-v1.17.0.1123560391.exe start -p stopped-upgrade-077088 --memory=2200 --vm-driver=docker  --container-runtime=crio: (1m12.897258949s)
version_upgrade_test.go:205: (dbg) Run:  /tmp/minikube-v1.17.0.1123560391.exe -p stopped-upgrade-077088 stop
version_upgrade_test.go:205: (dbg) Done: /tmp/minikube-v1.17.0.1123560391.exe -p stopped-upgrade-077088 stop: (20.228528241s)
version_upgrade_test.go:211: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-077088 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:211: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p stopped-upgrade-077088 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90 (6.847429687s)
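The sequence this subtest drives is: start a cluster with the archived v1.17.0 binary, stop it, then restart it with the binary under test; the third step is what exits with status 90 above. A condensed sketch of that flow (hypothetical wrapper; the real orchestration lives in version_upgrade_test.go, and the binary paths and profile name are taken from the log lines, not invented):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func run(bin string, args ...string) error {
		out, err := exec.Command(bin, args...).CombinedOutput()
		if err != nil {
			return fmt.Errorf("%s %v: %v\n%s", bin, args, err, out)
		}
		return nil
	}

	func main() {
		const profile = "stopped-upgrade-077088"
		legacy := "/tmp/minikube-v1.17.0.1123560391.exe"
		current := "out/minikube-linux-arm64"

		steps := [][]string{
			{legacy, "start", "-p", profile, "--memory=2200", "--vm-driver=docker", "--container-runtime=crio"},
			{legacy, "-p", profile, "stop"},
			{current, "start", "-p", profile, "--memory=2200", "--alsologtostderr", "-v=1", "--driver=docker", "--container-runtime=crio"},
		}
		for _, s := range steps {
			if err := run(s[0], s[1:]...); err != nil {
				fmt.Println(err) // the third step reproduces the exit status 90 above
				return
			}
		}
	}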
-- stdout --
	* [stopped-upgrade-077088] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17885
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17885-409390/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17885-409390/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	* Using the docker driver based on existing profile
	* Starting control plane node stopped-upgrade-077088 in cluster stopped-upgrade-077088
	* Pulling base image v0.0.42-1703498848-17857 ...
	* Restarting existing docker container for "stopped-upgrade-077088" ...
	
	
-- /stdout --
** stderr ** 
	I0103 20:33:47.636194  532630 out.go:296] Setting OutFile to fd 1 ...
	I0103 20:33:47.636376  532630 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0103 20:33:47.636385  532630 out.go:309] Setting ErrFile to fd 2...
	I0103 20:33:47.636391  532630 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0103 20:33:47.636664  532630 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17885-409390/.minikube/bin
	I0103 20:33:47.637048  532630 out.go:303] Setting JSON to false
	I0103 20:33:47.638050  532630 start.go:128] hostinfo: {"hostname":"ip-172-31-29-130","uptime":8177,"bootTime":1704305851,"procs":196,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0103 20:33:47.638130  532630 start.go:138] virtualization:  
	I0103 20:33:47.640934  532630 out.go:177] * [stopped-upgrade-077088] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0103 20:33:47.644051  532630 out.go:177]   - MINIKUBE_LOCATION=17885
	I0103 20:33:47.644170  532630 preload.go:306] deleting older generation preload /home/jenkins/minikube-integration/17885-409390/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v8-v1.20.2-cri-o-overlay-arm64.tar.lz4
	I0103 20:33:47.644205  532630 notify.go:220] Checking for updates...
	I0103 20:33:47.648996  532630 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0103 20:33:47.651098  532630 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17885-409390/kubeconfig
	I0103 20:33:47.653811  532630 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17885-409390/.minikube
	I0103 20:33:47.656093  532630 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0103 20:33:47.658467  532630 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0103 20:33:47.661230  532630 config.go:182] Loaded profile config "stopped-upgrade-077088": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I0103 20:33:47.664044  532630 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I0103 20:33:47.666387  532630 driver.go:392] Setting default libvirt URI to qemu:///system
	I0103 20:33:47.709125  532630 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I0103 20:33:47.709231  532630 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0103 20:33:47.783597  532630 preload.go:306] deleting older generation preload /home/jenkins/minikube-integration/17885-409390/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v8-v1.20.2-cri-o-overlay-arm64.tar.lz4.checksum
	I0103 20:33:47.808751  532630 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:40 OomKillDisable:true NGoroutines:45 SystemTime:2024-01-03 20:33:47.798550713 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215044096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0103 20:33:47.808852  532630 docker.go:295] overlay module found
	I0103 20:33:47.812117  532630 out.go:177] * Using the docker driver based on existing profile
	I0103 20:33:47.814677  532630 start.go:298] selected driver: docker
	I0103 20:33:47.814699  532630 start.go:902] validating driver "docker" against &{Name:stopped-upgrade-077088 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:stopped-upgrade-077088 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.59.33 Port:8443 KubernetesVersion:v1.20.2 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I0103 20:33:47.814800  532630 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0103 20:33:47.815454  532630 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0103 20:33:47.881810  532630 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:40 OomKillDisable:true NGoroutines:45 SystemTime:2024-01-03 20:33:47.87218938 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215044096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0103 20:33:47.882130  532630 cni.go:84] Creating CNI manager for ""
	I0103 20:33:47.882144  532630 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0103 20:33:47.882157  532630 start_flags.go:323] config:
	{Name:stopped-upgrade-077088 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:stopped-upgrade-077088 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.59.33 Port:8443 KubernetesVersion:v1.20.2 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I0103 20:33:47.885388  532630 out.go:177] * Starting control plane node stopped-upgrade-077088 in cluster stopped-upgrade-077088
	I0103 20:33:47.887766  532630 cache.go:121] Beginning downloading kic base image for docker with crio
	I0103 20:33:47.890354  532630 out.go:177] * Pulling base image v0.0.42-1703498848-17857 ...
	I0103 20:33:47.892812  532630 preload.go:132] Checking if preload exists for k8s version v1.20.2 and runtime crio
	I0103 20:33:47.892906  532630 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e in local docker daemon
	I0103 20:33:47.911109  532630 image.go:83] Found gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e in local docker daemon, skipping pull
	I0103 20:33:47.911137  532630 cache.go:144] gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e exists in daemon, skipping load
	W0103 20:33:47.984610  532630 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.2/preloaded-images-k8s-v18-v1.20.2-cri-o-overlay-arm64.tar.lz4 status code: 404
	I0103 20:33:47.984761  532630 profile.go:148] Saving config to /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/stopped-upgrade-077088/config.json ...
	I0103 20:33:47.984874  532630 cache.go:107] acquiring lock: {Name:mk78d87b8ba8b51b681a7c0163fc10f10a5ff4f1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0103 20:33:47.984955  532630 cache.go:115] /home/jenkins/minikube-integration/17885-409390/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0103 20:33:47.984964  532630 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/17885-409390/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 96.803µs
	I0103 20:33:47.984973  532630 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/17885-409390/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0103 20:33:47.984983  532630 cache.go:107] acquiring lock: {Name:mka55c36f2c1ee731e00cdb772546de2b15db0fa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0103 20:33:47.985011  532630 cache.go:115] /home/jenkins/minikube-integration/17885-409390/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2 exists
	I0103 20:33:47.985014  532630 cache.go:194] Successfully downloaded all kic artifacts
	I0103 20:33:47.985018  532630 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.20.2" -> "/home/jenkins/minikube-integration/17885-409390/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2" took 35.134µs
	I0103 20:33:47.985025  532630 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.20.2 -> /home/jenkins/minikube-integration/17885-409390/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2 succeeded
	I0103 20:33:47.985034  532630 cache.go:107] acquiring lock: {Name:mk50a92db3eea7005bb38171fdab6100907d689b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0103 20:33:47.985047  532630 start.go:365] acquiring machines lock for stopped-upgrade-077088: {Name:mkf7e1e3924093e554c6674511592cca55d1fae1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0103 20:33:47.985060  532630 cache.go:115] /home/jenkins/minikube-integration/17885-409390/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2 exists
	I0103 20:33:47.985065  532630 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.20.2" -> "/home/jenkins/minikube-integration/17885-409390/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2" took 32.278µs
	I0103 20:33:47.985072  532630 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.20.2 -> /home/jenkins/minikube-integration/17885-409390/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2 succeeded
	I0103 20:33:47.985080  532630 cache.go:107] acquiring lock: {Name:mk931f5c48fff6b3aa8eefdb1c5e4d81001db2c4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0103 20:33:47.985088  532630 start.go:369] acquired machines lock for "stopped-upgrade-077088" in 26.149µs
	I0103 20:33:47.985101  532630 start.go:96] Skipping create...Using existing machine configuration
	I0103 20:33:47.985106  532630 fix.go:54] fixHost starting: 
	I0103 20:33:47.985114  532630 cache.go:107] acquiring lock: {Name:mk5848a21617272537681ae8c4b87f10f3fde221 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0103 20:33:47.985142  532630 cache.go:115] /home/jenkins/minikube-integration/17885-409390/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2 exists
	I0103 20:33:47.985147  532630 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.20.2" -> "/home/jenkins/minikube-integration/17885-409390/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2" took 33.583µs
	I0103 20:33:47.985153  532630 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.20.2 -> /home/jenkins/minikube-integration/17885-409390/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2 succeeded
	I0103 20:33:47.985161  532630 cache.go:107] acquiring lock: {Name:mk2c4870517c3a4fedad9d87517a6219ab69b98f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0103 20:33:47.985193  532630 cache.go:115] /home/jenkins/minikube-integration/17885-409390/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2 exists
	I0103 20:33:47.985197  532630 cache.go:96] cache image "registry.k8s.io/pause:3.2" -> "/home/jenkins/minikube-integration/17885-409390/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2" took 37.169µs
	I0103 20:33:47.985204  532630 cache.go:80] save to tar file registry.k8s.io/pause:3.2 -> /home/jenkins/minikube-integration/17885-409390/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2 succeeded
	I0103 20:33:47.985211  532630 cache.go:107] acquiring lock: {Name:mkde64ff2c3d0eb724a11f7867644cecbc61610f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0103 20:33:47.985236  532630 cache.go:115] /home/jenkins/minikube-integration/17885-409390/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0 exists
	I0103 20:33:47.985244  532630 cache.go:96] cache image "registry.k8s.io/etcd:3.4.13-0" -> "/home/jenkins/minikube-integration/17885-409390/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0" took 29.768µs
	I0103 20:33:47.985250  532630 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.13-0 -> /home/jenkins/minikube-integration/17885-409390/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0 succeeded
	I0103 20:33:47.985259  532630 cache.go:107] acquiring lock: {Name:mkbde15802b62fcbea1b27c9517898b77c941f0d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0103 20:33:47.985282  532630 cache.go:115] /home/jenkins/minikube-integration/17885-409390/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0 exists
	I0103 20:33:47.985286  532630 cache.go:96] cache image "registry.k8s.io/coredns:1.7.0" -> "/home/jenkins/minikube-integration/17885-409390/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0" took 28.258µs
	I0103 20:33:47.985292  532630 cache.go:80] save to tar file registry.k8s.io/coredns:1.7.0 -> /home/jenkins/minikube-integration/17885-409390/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0 succeeded
	I0103 20:33:47.985107  532630 cache.go:115] /home/jenkins/minikube-integration/17885-409390/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2 exists
	I0103 20:33:47.985377  532630 cli_runner.go:164] Run: docker container inspect stopped-upgrade-077088 --format={{.State.Status}}
	I0103 20:33:47.985383  532630 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.20.2" -> "/home/jenkins/minikube-integration/17885-409390/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2" took 301.569µs
	I0103 20:33:47.985391  532630 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.20.2 -> /home/jenkins/minikube-integration/17885-409390/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2 succeeded
	I0103 20:33:47.985397  532630 cache.go:87] Successfully saved all images to host disk.
	I0103 20:33:48.012828  532630 fix.go:102] recreateIfNeeded on stopped-upgrade-077088: state=Stopped err=<nil>
	W0103 20:33:48.012868  532630 fix.go:128] unexpected machine state, will restart: <nil>
	I0103 20:33:48.018200  532630 out.go:177] * Restarting existing docker container for "stopped-upgrade-077088" ...
	I0103 20:33:48.020816  532630 cli_runner.go:164] Run: docker start stopped-upgrade-077088
	I0103 20:33:48.424136  532630 cli_runner.go:164] Run: docker container inspect stopped-upgrade-077088 --format={{.State.Status}}
	I0103 20:33:48.450696  532630 kic.go:430] container "stopped-upgrade-077088" state is running.
	I0103 20:33:48.451213  532630 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-077088
	I0103 20:33:48.475060  532630 profile.go:148] Saving config to /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/stopped-upgrade-077088/config.json ...
	I0103 20:33:48.475329  532630 machine.go:88] provisioning docker machine ...
	I0103 20:33:48.475343  532630 ubuntu.go:169] provisioning hostname "stopped-upgrade-077088"
	I0103 20:33:48.475393  532630 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-077088
	I0103 20:33:48.498862  532630 main.go:141] libmachine: Using SSH client type: native
	I0103 20:33:48.499600  532630 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3beb50] 0x3c12c0 <nil>  [] 0s} 127.0.0.1 33285 <nil> <nil>}
	I0103 20:33:48.499624  532630 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-077088 && echo "stopped-upgrade-077088" | sudo tee /etc/hostname
	I0103 20:33:48.500337  532630 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0103 20:33:51.654712  532630 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-077088
	
	I0103 20:33:51.654798  532630 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-077088
	I0103 20:33:51.678174  532630 main.go:141] libmachine: Using SSH client type: native
	I0103 20:33:51.678631  532630 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3beb50] 0x3c12c0 <nil>  [] 0s} 127.0.0.1 33285 <nil> <nil>}
	I0103 20:33:51.678657  532630 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-077088' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-077088/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-077088' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0103 20:33:51.820182  532630 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0103 20:33:51.820218  532630 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17885-409390/.minikube CaCertPath:/home/jenkins/minikube-integration/17885-409390/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17885-409390/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17885-409390/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17885-409390/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17885-409390/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17885-409390/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17885-409390/.minikube}
	I0103 20:33:51.820244  532630 ubuntu.go:177] setting up certificates
	I0103 20:33:51.820255  532630 provision.go:83] configureAuth start
	I0103 20:33:51.820318  532630 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-077088
	I0103 20:33:51.844224  532630 provision.go:138] copyHostCerts
	I0103 20:33:51.844310  532630 exec_runner.go:144] found /home/jenkins/minikube-integration/17885-409390/.minikube/cert.pem, removing ...
	I0103 20:33:51.844328  532630 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17885-409390/.minikube/cert.pem
	I0103 20:33:51.844406  532630 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17885-409390/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17885-409390/.minikube/cert.pem (1123 bytes)
	I0103 20:33:51.844525  532630 exec_runner.go:144] found /home/jenkins/minikube-integration/17885-409390/.minikube/key.pem, removing ...
	I0103 20:33:51.844536  532630 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17885-409390/.minikube/key.pem
	I0103 20:33:51.844567  532630 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17885-409390/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17885-409390/.minikube/key.pem (1679 bytes)
	I0103 20:33:51.844636  532630 exec_runner.go:144] found /home/jenkins/minikube-integration/17885-409390/.minikube/ca.pem, removing ...
	I0103 20:33:51.844645  532630 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17885-409390/.minikube/ca.pem
	I0103 20:33:51.844670  532630 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17885-409390/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17885-409390/.minikube/ca.pem (1078 bytes)
	I0103 20:33:51.845093  532630 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17885-409390/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17885-409390/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17885-409390/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-077088 san=[192.168.59.33 127.0.0.1 localhost 127.0.0.1 minikube stopped-upgrade-077088]
	I0103 20:33:52.403533  532630 provision.go:172] copyRemoteCerts
	I0103 20:33:52.403607  532630 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0103 20:33:52.403652  532630 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-077088
	I0103 20:33:52.429369  532630 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33285 SSHKeyPath:/home/jenkins/minikube-integration/17885-409390/.minikube/machines/stopped-upgrade-077088/id_rsa Username:docker}
	I0103 20:33:52.532712  532630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-409390/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0103 20:33:52.559394  532630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-409390/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0103 20:33:52.588487  532630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-409390/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0103 20:33:52.615289  532630 provision.go:86] duration metric: configureAuth took 795.001368ms
	I0103 20:33:52.615317  532630 ubuntu.go:193] setting minikube options for container-runtime
	I0103 20:33:52.615500  532630 config.go:182] Loaded profile config "stopped-upgrade-077088": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I0103 20:33:52.615622  532630 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-077088
	I0103 20:33:52.636935  532630 main.go:141] libmachine: Using SSH client type: native
	I0103 20:33:52.639762  532630 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3beb50] 0x3c12c0 <nil>  [] 0s} 127.0.0.1 33285 <nil> <nil>}
	I0103 20:33:52.639790  532630 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0103 20:33:53.139998  532630 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0103 20:33:53.140024  532630 machine.go:91] provisioned docker machine in 4.664685408s
	I0103 20:33:53.140036  532630 start.go:300] post-start starting for "stopped-upgrade-077088" (driver="docker")
	I0103 20:33:53.140047  532630 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0103 20:33:53.140111  532630 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0103 20:33:53.140167  532630 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-077088
	I0103 20:33:53.178617  532630 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33285 SSHKeyPath:/home/jenkins/minikube-integration/17885-409390/.minikube/machines/stopped-upgrade-077088/id_rsa Username:docker}
	I0103 20:33:53.292700  532630 ssh_runner.go:195] Run: cat /etc/os-release
	I0103 20:33:53.296916  532630 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0103 20:33:53.296940  532630 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0103 20:33:53.296951  532630 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0103 20:33:53.296958  532630 info.go:137] Remote host: Ubuntu 20.04.1 LTS
	I0103 20:33:53.296969  532630 filesync.go:126] Scanning /home/jenkins/minikube-integration/17885-409390/.minikube/addons for local assets ...
	I0103 20:33:53.297027  532630 filesync.go:126] Scanning /home/jenkins/minikube-integration/17885-409390/.minikube/files for local assets ...
	I0103 20:33:53.297114  532630 filesync.go:149] local asset: /home/jenkins/minikube-integration/17885-409390/.minikube/files/etc/ssl/certs/4147632.pem -> 4147632.pem in /etc/ssl/certs
	I0103 20:33:53.297220  532630 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0103 20:33:53.307075  532630 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-409390/.minikube/files/etc/ssl/certs/4147632.pem --> /etc/ssl/certs/4147632.pem (1708 bytes)
	I0103 20:33:53.331844  532630 start.go:303] post-start completed in 191.792768ms
	I0103 20:33:53.331923  532630 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0103 20:33:53.331972  532630 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-077088
	I0103 20:33:53.352222  532630 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33285 SSHKeyPath:/home/jenkins/minikube-integration/17885-409390/.minikube/machines/stopped-upgrade-077088/id_rsa Username:docker}
	I0103 20:33:53.448762  532630 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0103 20:33:53.454498  532630 fix.go:56] fixHost completed within 5.46938618s
	I0103 20:33:53.454543  532630 start.go:83] releasing machines lock for "stopped-upgrade-077088", held for 5.469444831s
	I0103 20:33:53.454611  532630 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-077088
	I0103 20:33:53.473689  532630 ssh_runner.go:195] Run: cat /version.json
	I0103 20:33:53.473746  532630 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-077088
	I0103 20:33:53.473747  532630 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0103 20:33:53.473788  532630 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-077088
	I0103 20:33:53.495333  532630 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33285 SSHKeyPath:/home/jenkins/minikube-integration/17885-409390/.minikube/machines/stopped-upgrade-077088/id_rsa Username:docker}
	I0103 20:33:53.496553  532630 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33285 SSHKeyPath:/home/jenkins/minikube-integration/17885-409390/.minikube/machines/stopped-upgrade-077088/id_rsa Username:docker}
	W0103 20:33:53.658682  532630 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0103 20:33:53.658815  532630 ssh_runner.go:195] Run: systemctl --version
	I0103 20:33:53.664100  532630 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0103 20:33:53.825775  532630 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0103 20:33:53.831435  532630 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0103 20:33:53.855762  532630 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0103 20:33:53.855847  532630 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0103 20:33:53.883611  532630 cni.go:262] disabled [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0103 20:33:53.883635  532630 start.go:475] detecting cgroup driver to use...
	I0103 20:33:53.883671  532630 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0103 20:33:53.883727  532630 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0103 20:33:53.912556  532630 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0103 20:33:53.924591  532630 docker.go:203] disabling cri-docker service (if available) ...
	I0103 20:33:53.924695  532630 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0103 20:33:53.937191  532630 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0103 20:33:53.949049  532630 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W0103 20:33:53.962784  532630 docker.go:213] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I0103 20:33:53.962851  532630 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0103 20:33:54.072316  532630 docker.go:219] disabling docker service ...
	I0103 20:33:54.072399  532630 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0103 20:33:54.087353  532630 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0103 20:33:54.100624  532630 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0103 20:33:54.210615  532630 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0103 20:33:54.339692  532630 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0103 20:33:54.354690  532630 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0103 20:33:54.380699  532630 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0103 20:33:54.380884  532630 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0103 20:33:54.402139  532630 out.go:177] 
	W0103 20:33:54.405186  532630 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	W0103 20:33:54.405276  532630 out.go:239] * 
	W0103 20:33:54.406330  532630 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0103 20:33:54.409641  532630 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:213: upgrade from v1.17.0 to HEAD failed: out/minikube-linux-arm64 start -p stopped-upgrade-077088 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (99.98s)
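
The stderr above points at the root cause: the restored v1.17.0 profile runs the old kicbase v0.0.17 image, which predates the /etc/crio/crio.conf.d/ drop-in layout, so the unconditional sed that pins pause_image finds no /etc/crio/crio.conf.d/02-crio.conf and the start aborts with RUNTIME_ENABLE. A minimal defensive sketch of the idea, in shell (hypothetical, not minikube's actual fix; the fallback to the monolithic /etc/crio/crio.conf is an assumption about what older images ship):

	# Pin pause_image in whichever cri-o config file actually exists,
	# instead of assuming the 02-crio.conf drop-in is present.
	for f in /etc/crio/crio.conf.d/02-crio.conf /etc/crio/crio.conf; do
	  if sudo test -f "$f"; then
	    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' "$f"
	    exit 0
	  fi
	done
	echo "no cri-o configuration file found" >&2
	exit 1

With a guard like this, an upgrade from an image that only ships /etc/crio/crio.conf would still get the pause image pinned instead of failing the entire start.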

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (54.8s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-589189 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E0103 20:37:04.568575  414763 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/ingress-addon-legacy-480050/client.crt: no such file or directory
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-589189 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (47.493149969s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-589189] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17885
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17885-409390/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17885-409390/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting control plane node pause-589189 in cluster pause-589189
	* Pulling base image v0.0.42-1703498848-17857 ...
	* Updating the running docker "pause-589189" container ...
	* Preparing Kubernetes v1.28.4 on CRI-O 1.24.6 ...
	* Configuring CNI (Container Networking Interface) ...
	* Enabled addons: 
	* Verifying Kubernetes components...
	* Done! kubectl is now configured to use "pause-589189" cluster and "default" namespace by default

                                                
                                                
-- /stdout --
** stderr ** 
	I0103 20:36:42.537082  545961 out.go:296] Setting OutFile to fd 1 ...
	I0103 20:36:42.537308  545961 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0103 20:36:42.537335  545961 out.go:309] Setting ErrFile to fd 2...
	I0103 20:36:42.537355  545961 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0103 20:36:42.537688  545961 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17885-409390/.minikube/bin
	I0103 20:36:42.538183  545961 out.go:303] Setting JSON to false
	I0103 20:36:42.539251  545961 start.go:128] hostinfo: {"hostname":"ip-172-31-29-130","uptime":8352,"bootTime":1704305851,"procs":216,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0103 20:36:42.539371  545961 start.go:138] virtualization:  
	I0103 20:36:42.543058  545961 out.go:177] * [pause-589189] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0103 20:36:42.544998  545961 out.go:177]   - MINIKUBE_LOCATION=17885
	I0103 20:36:42.546677  545961 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0103 20:36:42.545121  545961 notify.go:220] Checking for updates...
	I0103 20:36:42.548645  545961 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17885-409390/kubeconfig
	I0103 20:36:42.550994  545961 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17885-409390/.minikube
	I0103 20:36:42.552716  545961 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0103 20:36:42.554602  545961 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0103 20:36:42.557083  545961 config.go:182] Loaded profile config "pause-589189": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0103 20:36:42.557638  545961 driver.go:392] Setting default libvirt URI to qemu:///system
	I0103 20:36:42.582716  545961 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I0103 20:36:42.582825  545961 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0103 20:36:42.665435  545961 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:55 SystemTime:2024-01-03 20:36:42.653216809 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215044096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0103 20:36:42.665538  545961 docker.go:295] overlay module found
	I0103 20:36:42.667470  545961 out.go:177] * Using the docker driver based on existing profile
	I0103 20:36:42.669054  545961 start.go:298] selected driver: docker
	I0103 20:36:42.669074  545961 start.go:902] validating driver "docker" against &{Name:pause-589189 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:pause-589189 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0103 20:36:42.669211  545961 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0103 20:36:42.669320  545961 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0103 20:36:42.749269  545961 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:55 SystemTime:2024-01-03 20:36:42.731932968 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215044096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0103 20:36:42.749728  545961 cni.go:84] Creating CNI manager for ""
	I0103 20:36:42.749749  545961 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0103 20:36:42.749762  545961 start_flags.go:323] config:
	{Name:pause-589189 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:pause-589189 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0103 20:36:42.752905  545961 out.go:177] * Starting control plane node pause-589189 in cluster pause-589189
	I0103 20:36:42.754750  545961 cache.go:121] Beginning downloading kic base image for docker with crio
	I0103 20:36:42.756444  545961 out.go:177] * Pulling base image v0.0.42-1703498848-17857 ...
	I0103 20:36:42.758115  545961 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0103 20:36:42.758170  545961 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17885-409390/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4
	I0103 20:36:42.758182  545961 cache.go:56] Caching tarball of preloaded images
	I0103 20:36:42.758218  545961 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c in local docker daemon
	I0103 20:36:42.758273  545961 preload.go:174] Found /home/jenkins/minikube-integration/17885-409390/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0103 20:36:42.758283  545961 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0103 20:36:42.758419  545961 profile.go:148] Saving config to /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/pause-589189/config.json ...
	I0103 20:36:42.776928  545961 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c in local docker daemon, skipping pull
	I0103 20:36:42.776953  545961 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c exists in daemon, skipping load
	I0103 20:36:42.776972  545961 cache.go:194] Successfully downloaded all kic artifacts
	I0103 20:36:42.777022  545961 start.go:365] acquiring machines lock for pause-589189: {Name:mk2161a41da1a7de623c0c950b62bff860fd2d99 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0103 20:36:42.777086  545961 start.go:369] acquired machines lock for "pause-589189" in 42.641µs
	I0103 20:36:42.777106  545961 start.go:96] Skipping create...Using existing machine configuration
	I0103 20:36:42.777111  545961 fix.go:54] fixHost starting: 
	I0103 20:36:42.777393  545961 cli_runner.go:164] Run: docker container inspect pause-589189 --format={{.State.Status}}
	I0103 20:36:42.797432  545961 fix.go:102] recreateIfNeeded on pause-589189: state=Running err=<nil>
	W0103 20:36:42.797461  545961 fix.go:128] unexpected machine state, will restart: <nil>
	I0103 20:36:42.799498  545961 out.go:177] * Updating the running docker "pause-589189" container ...
	I0103 20:36:42.801679  545961 machine.go:88] provisioning docker machine ...
	I0103 20:36:42.801713  545961 ubuntu.go:169] provisioning hostname "pause-589189"
	I0103 20:36:42.801818  545961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-589189
	I0103 20:36:42.820500  545961 main.go:141] libmachine: Using SSH client type: native
	I0103 20:36:42.821006  545961 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3beb50] 0x3c12c0 <nil>  [] 0s} 127.0.0.1 33294 <nil> <nil>}
	I0103 20:36:42.821028  545961 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-589189 && echo "pause-589189" | sudo tee /etc/hostname
	I0103 20:36:42.974248  545961 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-589189
	
	I0103 20:36:42.974345  545961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-589189
	I0103 20:36:42.992912  545961 main.go:141] libmachine: Using SSH client type: native
	I0103 20:36:42.993412  545961 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3beb50] 0x3c12c0 <nil>  [] 0s} 127.0.0.1 33294 <nil> <nil>}
	I0103 20:36:42.993437  545961 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-589189' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-589189/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-589189' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0103 20:36:43.136845  545961 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0103 20:36:43.136875  545961 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17885-409390/.minikube CaCertPath:/home/jenkins/minikube-integration/17885-409390/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17885-409390/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17885-409390/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17885-409390/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17885-409390/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17885-409390/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17885-409390/.minikube}
	I0103 20:36:43.136893  545961 ubuntu.go:177] setting up certificates
	I0103 20:36:43.136903  545961 provision.go:83] configureAuth start
	I0103 20:36:43.136970  545961 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-589189
	I0103 20:36:43.162178  545961 provision.go:138] copyHostCerts
	I0103 20:36:43.162261  545961 exec_runner.go:144] found /home/jenkins/minikube-integration/17885-409390/.minikube/ca.pem, removing ...
	I0103 20:36:43.162273  545961 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17885-409390/.minikube/ca.pem
	I0103 20:36:43.162354  545961 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17885-409390/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17885-409390/.minikube/ca.pem (1078 bytes)
	I0103 20:36:43.162457  545961 exec_runner.go:144] found /home/jenkins/minikube-integration/17885-409390/.minikube/cert.pem, removing ...
	I0103 20:36:43.162467  545961 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17885-409390/.minikube/cert.pem
	I0103 20:36:43.162495  545961 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17885-409390/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17885-409390/.minikube/cert.pem (1123 bytes)
	I0103 20:36:43.162615  545961 exec_runner.go:144] found /home/jenkins/minikube-integration/17885-409390/.minikube/key.pem, removing ...
	I0103 20:36:43.162626  545961 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17885-409390/.minikube/key.pem
	I0103 20:36:43.162660  545961 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17885-409390/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17885-409390/.minikube/key.pem (1679 bytes)
	I0103 20:36:43.162710  545961 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17885-409390/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17885-409390/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17885-409390/.minikube/certs/ca-key.pem org=jenkins.pause-589189 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube pause-589189]
	I0103 20:36:43.382306  545961 provision.go:172] copyRemoteCerts
	I0103 20:36:43.382391  545961 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0103 20:36:43.382446  545961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-589189
	I0103 20:36:43.403775  545961 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33294 SSHKeyPath:/home/jenkins/minikube-integration/17885-409390/.minikube/machines/pause-589189/id_rsa Username:docker}
	I0103 20:36:43.506344  545961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-409390/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0103 20:36:43.538013  545961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-409390/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0103 20:36:43.568045  545961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-409390/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I0103 20:36:43.600865  545961 provision.go:86] duration metric: configureAuth took 463.94709ms
	I0103 20:36:43.600953  545961 ubuntu.go:193] setting minikube options for container-runtime
	I0103 20:36:43.601214  545961 config.go:182] Loaded profile config "pause-589189": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0103 20:36:43.601384  545961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-589189
	I0103 20:36:43.623975  545961 main.go:141] libmachine: Using SSH client type: native
	I0103 20:36:43.624392  545961 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3beb50] 0x3c12c0 <nil>  [] 0s} 127.0.0.1 33294 <nil> <nil>}
	I0103 20:36:43.624418  545961 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0103 20:36:49.124012  545961 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0103 20:36:49.124050  545961 machine.go:91] provisioned docker machine in 6.322353106s
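
The sysconfig file written just above passes extra flags to CRI-O through an environment file; 10.96.0.0/12 is the Kubernetes service CIDR used later in this run, so in-cluster registry services can be pulled from without TLS. Rendered as a Go constant for reference (a sketch; it is assumed here, not shown in the log, that the kicbase image's crio.service loads /etc/sysconfig/crio.minikube via EnvironmentFile and expands $CRIO_MINIKUBE_OPTIONS):

    // Environment file written over SSH above. Assumption: crio.service
    // references $CRIO_MINIKUBE_OPTIONS somewhere in its ExecStart line.
    const crioMinikubeSysconfig = "\nCRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '\n"
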
	I0103 20:36:49.124061  545961 start.go:300] post-start starting for "pause-589189" (driver="docker")
	I0103 20:36:49.124072  545961 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0103 20:36:49.124141  545961 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0103 20:36:49.124181  545961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-589189
	I0103 20:36:49.152200  545961 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33294 SSHKeyPath:/home/jenkins/minikube-integration/17885-409390/.minikube/machines/pause-589189/id_rsa Username:docker}
	I0103 20:36:49.258826  545961 ssh_runner.go:195] Run: cat /etc/os-release
	I0103 20:36:49.265821  545961 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0103 20:36:49.265921  545961 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0103 20:36:49.265954  545961 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0103 20:36:49.266003  545961 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0103 20:36:49.266028  545961 filesync.go:126] Scanning /home/jenkins/minikube-integration/17885-409390/.minikube/addons for local assets ...
	I0103 20:36:49.266142  545961 filesync.go:126] Scanning /home/jenkins/minikube-integration/17885-409390/.minikube/files for local assets ...
	I0103 20:36:49.266339  545961 filesync.go:149] local asset: /home/jenkins/minikube-integration/17885-409390/.minikube/files/etc/ssl/certs/4147632.pem -> 4147632.pem in /etc/ssl/certs
	I0103 20:36:49.266585  545961 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0103 20:36:49.283187  545961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-409390/.minikube/files/etc/ssl/certs/4147632.pem --> /etc/ssl/certs/4147632.pem (1708 bytes)
	I0103 20:36:49.328306  545961 start.go:303] post-start completed in 204.22821ms
	I0103 20:36:49.328480  545961 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0103 20:36:49.328577  545961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-589189
	I0103 20:36:49.355969  545961 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33294 SSHKeyPath:/home/jenkins/minikube-integration/17885-409390/.minikube/machines/pause-589189/id_rsa Username:docker}
	I0103 20:36:49.467383  545961 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0103 20:36:49.475764  545961 fix.go:56] fixHost completed within 6.698642396s
	I0103 20:36:49.475798  545961 start.go:83] releasing machines lock for "pause-589189", held for 6.698704024s
	I0103 20:36:49.475888  545961 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-589189
	I0103 20:36:49.505916  545961 ssh_runner.go:195] Run: cat /version.json
	I0103 20:36:49.505981  545961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-589189
	I0103 20:36:49.506339  545961 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0103 20:36:49.506397  545961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-589189
	I0103 20:36:49.541580  545961 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33294 SSHKeyPath:/home/jenkins/minikube-integration/17885-409390/.minikube/machines/pause-589189/id_rsa Username:docker}
	I0103 20:36:49.560395  545961 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33294 SSHKeyPath:/home/jenkins/minikube-integration/17885-409390/.minikube/machines/pause-589189/id_rsa Username:docker}
	I0103 20:36:49.644472  545961 ssh_runner.go:195] Run: systemctl --version
	I0103 20:36:49.798312  545961 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0103 20:36:49.976140  545961 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0103 20:36:49.983004  545961 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0103 20:36:49.996374  545961 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0103 20:36:49.996523  545961 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0103 20:36:50.012187  545961 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0103 20:36:50.012626  545961 start.go:475] detecting cgroup driver to use...
	I0103 20:36:50.012708  545961 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0103 20:36:50.012808  545961 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0103 20:36:50.040502  545961 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0103 20:36:50.060361  545961 docker.go:203] disabling cri-docker service (if available) ...
	I0103 20:36:50.060486  545961 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0103 20:36:50.083341  545961 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0103 20:36:50.103546  545961 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0103 20:36:50.288511  545961 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0103 20:36:50.449453  545961 docker.go:219] disabling docker service ...
	I0103 20:36:50.449581  545961 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0103 20:36:50.468114  545961 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0103 20:36:50.485036  545961 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0103 20:36:50.652320  545961 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0103 20:36:50.819897  545961 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0103 20:36:50.836975  545961 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0103 20:36:50.863287  545961 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0103 20:36:50.863404  545961 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0103 20:36:50.876805  545961 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0103 20:36:50.876918  545961 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0103 20:36:50.891040  545961 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0103 20:36:50.905411  545961 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0103 20:36:50.920204  545961 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0103 20:36:50.933021  545961 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0103 20:36:50.945311  545961 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0103 20:36:50.957401  545961 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0103 20:36:51.195511  545961 ssh_runner.go:195] Run: sudo systemctl restart crio
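
Two notes on the runtime configuration above. First, writing /etc/crictl.yaml pins crictl's runtime endpoint to CRI-O's socket, which is why the bare `sudo crictl images` and `crictl ps` invocations later in this log need no --runtime-endpoint flag. Second, the sed sequence pins the sandbox (pause) image and aligns CRI-O with the "cgroupfs" driver detected at detect.go:196; with the cgroupfs manager the only valid conmon_cgroup value is "pod" (systemd slices require the systemd manager), hence the delete-then-append pair of edits. Sketched as Go constants (the drop-in shape is approximate; unrelated keys in 02-crio.conf are untouched):

    // Default endpoint picked up by every later crictl invocation.
    const crictlConfig = "runtime-endpoint: unix:///var/run/crio/crio.sock\n"

    // Approximate post-edit shape of /etc/crio/crio.conf.d/02-crio.conf.
    const crioDropIn = "[crio.image]\n" +
        "pause_image = \"registry.k8s.io/pause:3.9\"\n\n" +
        "[crio.runtime]\n" +
        "cgroup_manager = \"cgroupfs\"\n" +
        "conmon_cgroup = \"pod\"\n"
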
	I0103 20:36:51.893204  545961 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0103 20:36:51.893325  545961 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0103 20:36:51.898151  545961 start.go:543] Will wait 60s for crictl version
	I0103 20:36:51.898223  545961 ssh_runner.go:195] Run: which crictl
	I0103 20:36:51.903857  545961 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0103 20:36:51.954886  545961 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0103 20:36:51.955012  545961 ssh_runner.go:195] Run: crio --version
	I0103 20:36:52.018102  545961 ssh_runner.go:195] Run: crio --version
	I0103 20:36:52.071074  545961 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.6 ...
	I0103 20:36:52.073630  545961 cli_runner.go:164] Run: docker network inspect pause-589189 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0103 20:36:52.091927  545961 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0103 20:36:52.096883  545961 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0103 20:36:52.096955  545961 ssh_runner.go:195] Run: sudo crictl images --output json
	I0103 20:36:52.180276  545961 crio.go:496] all images are preloaded for cri-o runtime.
	I0103 20:36:52.180302  545961 crio.go:415] Images already preloaded, skipping extraction
	I0103 20:36:52.180363  545961 ssh_runner.go:195] Run: sudo crictl images --output json
	I0103 20:36:52.250421  545961 crio.go:496] all images are preloaded for cri-o runtime.
	I0103 20:36:52.250448  545961 cache_images.go:84] Images are preloaded, skipping loading
	I0103 20:36:52.250534  545961 ssh_runner.go:195] Run: crio config
	I0103 20:36:52.325029  545961 cni.go:84] Creating CNI manager for ""
	I0103 20:36:52.325047  545961 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0103 20:36:52.325077  545961 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0103 20:36:52.325096  545961 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-589189 NodeName:pause-589189 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0103 20:36:52.325228  545961 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-589189"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
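
The kubeadm config rendered above is a four-document YAML stream (InitConfiguration and ClusterConfiguration from kubeadm.k8s.io/v1beta3, plus a KubeletConfiguration and a KubeProxyConfiguration) joined by --- separators and written to /var/tmp/minikube/kubeadm.yaml.new below. A minimal Go sketch that splits such a stream and reports each document's kind (assumes the sigs.k8s.io/yaml module; not minikube's own code):

    import (
        "strings"

        "sigs.k8s.io/yaml" // assumed dependency
    )

    // kinds lists apiVersion/kind for each document in a YAML stream.
    func kinds(stream string) ([]string, error) {
        var out []string
        for _, doc := range strings.Split(stream, "\n---\n") {
            var meta struct {
                APIVersion string `json:"apiVersion"`
                Kind       string `json:"kind"`
            }
            if err := yaml.Unmarshal([]byte(doc), &meta); err != nil {
                return nil, err
            }
            out = append(out, meta.APIVersion+"/"+meta.Kind)
        }
        return out, nil
    }
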
	
	I0103 20:36:52.325302  545961 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=pause-589189 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:pause-589189 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
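
In the kubelet drop-in above, the empty ExecStart= line is deliberate systemd convention: an override must first clear the ExecStart inherited from the packaged unit before setting its own, since only oneshot services may carry more than one ExecStart value. A hypothetical rendering helper (a sketch; the flag list is abridged from the log line above, and strings is assumed imported):

    // renderKubeletDropIn builds a 10-kubeadm.conf-style override. The
    // empty ExecStart= clears the command inherited from the packaged
    // kubelet.service before the override defines its own.
    func renderKubeletDropIn(kubeletBin, hostname, nodeIP string) string {
        args := []string{
            "--config=/var/lib/kubelet/config.yaml",
            "--container-runtime-endpoint=unix:///var/run/crio/crio.sock",
            "--hostname-override=" + hostname,
            "--kubeconfig=/etc/kubernetes/kubelet.conf",
            "--node-ip=" + nodeIP,
        }
        return "[Unit]\nWants=crio.service\n\n[Service]\nExecStart=\nExecStart=" +
            kubeletBin + " " + strings.Join(args, " ") + "\n\n[Install]\n"
    }
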
	I0103 20:36:52.325368  545961 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0103 20:36:52.336716  545961 binaries.go:44] Found k8s binaries, skipping transfer
	I0103 20:36:52.336831  545961 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0103 20:36:52.350263  545961 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (422 bytes)
	I0103 20:36:52.374692  545961 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0103 20:36:52.397015  545961 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2093 bytes)
	I0103 20:36:52.419697  545961 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0103 20:36:52.424516  545961 certs.go:56] Setting up /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/pause-589189 for IP: 192.168.76.2
	I0103 20:36:52.424551  545961 certs.go:190] acquiring lock for shared ca certs: {Name:mk7a87d13d39d2defe5d349d371b78fa1f1e95bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 20:36:52.424745  545961 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17885-409390/.minikube/ca.key
	I0103 20:36:52.424810  545961 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17885-409390/.minikube/proxy-client-ca.key
	I0103 20:36:52.424911  545961 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/pause-589189/client.key
	I0103 20:36:52.424982  545961 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/pause-589189/apiserver.key.31bdca25
	I0103 20:36:52.425027  545961 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/pause-589189/proxy-client.key
	I0103 20:36:52.425156  545961 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-409390/.minikube/certs/home/jenkins/minikube-integration/17885-409390/.minikube/certs/414763.pem (1338 bytes)
	W0103 20:36:52.425188  545961 certs.go:433] ignoring /home/jenkins/minikube-integration/17885-409390/.minikube/certs/home/jenkins/minikube-integration/17885-409390/.minikube/certs/414763_empty.pem, impossibly tiny 0 bytes
	I0103 20:36:52.425200  545961 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-409390/.minikube/certs/home/jenkins/minikube-integration/17885-409390/.minikube/certs/ca-key.pem (1679 bytes)
	I0103 20:36:52.425228  545961 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-409390/.minikube/certs/home/jenkins/minikube-integration/17885-409390/.minikube/certs/ca.pem (1078 bytes)
	I0103 20:36:52.425259  545961 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-409390/.minikube/certs/home/jenkins/minikube-integration/17885-409390/.minikube/certs/cert.pem (1123 bytes)
	I0103 20:36:52.425297  545961 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-409390/.minikube/certs/home/jenkins/minikube-integration/17885-409390/.minikube/certs/key.pem (1679 bytes)
	I0103 20:36:52.425349  545961 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-409390/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17885-409390/.minikube/files/etc/ssl/certs/4147632.pem (1708 bytes)
	I0103 20:36:52.426073  545961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/pause-589189/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0103 20:36:52.455787  545961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/pause-589189/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0103 20:36:52.499845  545961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/pause-589189/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0103 20:36:52.534164  545961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/pause-589189/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0103 20:36:52.565444  545961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-409390/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0103 20:36:52.595718  545961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-409390/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0103 20:36:52.625448  545961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-409390/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0103 20:36:52.654239  545961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-409390/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0103 20:36:52.684137  545961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-409390/.minikube/files/etc/ssl/certs/4147632.pem --> /usr/share/ca-certificates/4147632.pem (1708 bytes)
	I0103 20:36:52.718881  545961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-409390/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0103 20:36:52.749962  545961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-409390/.minikube/certs/414763.pem --> /usr/share/ca-certificates/414763.pem (1338 bytes)
	I0103 20:36:52.779276  545961 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0103 20:36:52.800769  545961 ssh_runner.go:195] Run: openssl version
	I0103 20:36:52.808050  545961 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/414763.pem && ln -fs /usr/share/ca-certificates/414763.pem /etc/ssl/certs/414763.pem"
	I0103 20:36:52.819944  545961 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/414763.pem
	I0103 20:36:52.824796  545961 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan  3 20:01 /usr/share/ca-certificates/414763.pem
	I0103 20:36:52.824859  545961 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/414763.pem
	I0103 20:36:52.833419  545961 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/414763.pem /etc/ssl/certs/51391683.0"
	I0103 20:36:52.844562  545961 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4147632.pem && ln -fs /usr/share/ca-certificates/4147632.pem /etc/ssl/certs/4147632.pem"
	I0103 20:36:52.856354  545961 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4147632.pem
	I0103 20:36:52.861281  545961 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan  3 20:01 /usr/share/ca-certificates/4147632.pem
	I0103 20:36:52.861350  545961 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4147632.pem
	I0103 20:36:52.870000  545961 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4147632.pem /etc/ssl/certs/3ec20f2e.0"
	I0103 20:36:52.881066  545961 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0103 20:36:52.893393  545961 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0103 20:36:52.898119  545961 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan  3 19:53 /usr/share/ca-certificates/minikubeCA.pem
	I0103 20:36:52.898302  545961 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0103 20:36:52.907026  545961 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
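
The openssl x509 -hash / ln -fs pairs above exist because OpenSSL locates CA certificates in /etc/ssl/certs by subject-hash filename (<hash>.0), so each installed PEM needs a matching symlink: 51391683.0, 3ec20f2e.0 and b5213941.0 here. The same step as a local Go sketch (assumes openssl on PATH plus the os, os/exec, path/filepath and strings imports):

    // linkBySubjectHash installs the OpenSSL-style <hash>.0 symlink for a
    // CA certificate, mirroring the openssl/ln pairs in the log.
    func linkBySubjectHash(pemPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            return err
        }
        link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
        if _, err := os.Lstat(link); err == nil {
            return nil // a link (or file) with this hash already exists
        }
        return os.Symlink(pemPath, link)
    }
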
	I0103 20:36:52.918175  545961 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0103 20:36:52.923023  545961 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0103 20:36:52.931793  545961 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0103 20:36:52.940504  545961 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0103 20:36:52.949154  545961 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0103 20:36:52.957866  545961 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0103 20:36:52.966655  545961 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
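
Each `openssl x509 ... -checkend 86400` run above exits non-zero when the certificate expires within the next 24 hours; a failure at this point is presumably what would prompt certificate regeneration before StartCluster. The equivalent check without shelling out, using Go's standard library (a sketch; imports crypto/x509, encoding/pem, fmt, os and time):

    // certValidFor24h reports whether the certificate at path is still
    // valid 24 hours from now, matching `openssl x509 -checkend 86400`.
    func certValidFor24h(path string) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM data in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(24 * time.Hour).Before(cert.NotAfter), nil
    }
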
	I0103 20:36:52.975552  545961 kubeadm.go:404] StartCluster: {Name:pause-589189 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:pause-589189 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0103 20:36:52.975720  545961 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0103 20:36:52.975793  545961 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0103 20:36:53.020568  545961 cri.go:89] found id: "fe332dccccd3cfdf9eb22fda618573d4ce50f4b9a7d78741ca394f0248e4a1bb"
	I0103 20:36:53.020596  545961 cri.go:89] found id: "d22ad169c567e2011ce0d9196523e4a33db859f3ebadfc4f50c24d4372357ae0"
	I0103 20:36:53.020603  545961 cri.go:89] found id: "85eb04634a23fe2f0cabe61dd973cbc0b3efaf531040055e52ee92f023d47bfe"
	I0103 20:36:53.020607  545961 cri.go:89] found id: "78f150ce8a96bcfa7c3d36b4aab3838e2634c219621756b6d4ce3911e540b5a8"
	I0103 20:36:53.020612  545961 cri.go:89] found id: "48229eac57ee1c0f3d9f6bc28113a1e66b5ba7e3490f1f786830c924316737c5"
	I0103 20:36:53.020624  545961 cri.go:89] found id: "0b4de2b22c7b4724729f6e7fb8c0e390ff80431329c90ae0cc8d0a069bcdfb8d"
	I0103 20:36:53.020630  545961 cri.go:89] found id: "073d47a61187f01ab01df5981bf363672c9b8fe4f93af6e737cdcbb14709924b"
	I0103 20:36:53.020634  545961 cri.go:89] found id: ""
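
The pipeline here: `crictl ps -a` (above) collects candidate kube-system container IDs, `runc list -f json` (below) supplies each container's low-level state, and only containers whose status matches the wanted state, "paused" per the cri.go:54 line above, survive; the subsequent "skipping ... want \"paused\"" lines are this filter rejecting stopped containers. A minimal sketch of that filter (hypothetical types; assumes the encoding/json import):

    // runcState mirrors the two fields used from `runc list -f json`.
    type runcState struct {
        ID     string `json:"id"`
        Status string `json:"status"`
    }

    // filterByState keeps only container IDs whose runc status matches
    // the wanted state, e.g. "paused".
    func filterByState(raw []byte, want string) ([]string, error) {
        var list []runcState
        if err := json.Unmarshal(raw, &list); err != nil {
            return nil, err
        }
        var ids []string
        for _, c := range list {
            if c.Status == want {
                ids = append(ids, c.ID)
            }
        }
        return ids, nil
    }
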
	I0103 20:36:53.020699  545961 ssh_runner.go:195] Run: sudo runc list -f json
	I0103 20:36:53.046736  545961 cri.go:116] JSON = [{"ociVersion":"1.0.2-dev","id":"073d47a61187f01ab01df5981bf363672c9b8fe4f93af6e737cdcbb14709924b","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/073d47a61187f01ab01df5981bf363672c9b8fe4f93af6e737cdcbb14709924b/userdata","rootfs":"/var/lib/containers/storage/overlay/d54fac617ae0277447272bdf08bc180e54081dc77f96ae5feb950b9669c80b08/merged","created":"2024-01-03T20:35:47.256358786Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"f45930a3","io.kubernetes.container.name":"etcd","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"f45930a3\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePo
licy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"073d47a61187f01ab01df5981bf363672c9b8fe4f93af6e737cdcbb14709924b","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2024-01-03T20:35:47.057835757Z","io.kubernetes.cri-o.Image":"9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace","io.kubernetes.cri-o.ImageName":"registry.k8s.io/etcd:3.5.9-0","io.kubernetes.cri-o.ImageRef":"9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"etcd\",\"io.kubernetes.pod.name\":\"etcd-pause-589189\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"509063750c2e63cf6a08ed0483b77f1c\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_etcd-pause-589189_509063750c2e63cf6a08ed0483b77f1c/etcd/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"etcd\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/d54fac61
7ae0277447272bdf08bc180e54081dc77f96ae5feb950b9669c80b08/merged","io.kubernetes.cri-o.Name":"k8s_etcd_etcd-pause-589189_kube-system_509063750c2e63cf6a08ed0483b77f1c_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/f2b5abd3abda20e3d833723fcc28e5299f016a6b3b140e1ca88d30908a967d23/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"f2b5abd3abda20e3d833723fcc28e5299f016a6b3b140e1ca88d30908a967d23","io.kubernetes.cri-o.SandboxName":"k8s_etcd-pause-589189_kube-system_509063750c2e63cf6a08ed0483b77f1c_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/509063750c2e63cf6a08ed0483b77f1c/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/509063750c2e63cf6a08ed0483b77f1c/cont
ainers/etcd/dabcac49\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/etcd\",\"host_path\":\"/var/lib/minikube/etcd\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs/etcd\",\"host_path\":\"/var/lib/minikube/certs/etcd\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"etcd-pause-589189","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"509063750c2e63cf6a08ed0483b77f1c","kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.76.2:2379","kubernetes.io/config.hash":"509063750c2e63cf6a08ed0483b77f1c","kubernetes.io/config.seen":"2024-01-03T20:35:46.473320242Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"0b4de2b22c7b4724729f6e7fb8c0e390ff80431329c90ae0cc8d0a069bcdfb8d","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay
-containers/0b4de2b22c7b4724729f6e7fb8c0e390ff80431329c90ae0cc8d0a069bcdfb8d/userdata","rootfs":"/var/lib/containers/storage/overlay/b01f2365fcc4585b3ff242fd9e5b06ee11f71a35bdc14e62bb767ea602fee819/merged","created":"2024-01-03T20:36:51.231286424Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"e1639c7a","io.kubernetes.container.name":"kube-scheduler","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"e1639c7a\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"0b4de2b22c7b4724729f6e7fb8c0e390ff80431329c90ae0cc8d0a069bcdfb8d","io.kubernetes.cri-o.ContainerType":"contain
er","io.kubernetes.cri-o.Created":"2024-01-03T20:36:51.161018304Z","io.kubernetes.cri-o.Image":"05c284c929889d88306fdb3dd14ee2d0132543740f9e247685243214fc3d2c54","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-scheduler:v1.28.4","io.kubernetes.cri-o.ImageRef":"05c284c929889d88306fdb3dd14ee2d0132543740f9e247685243214fc3d2c54","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-scheduler\",\"io.kubernetes.pod.name\":\"kube-scheduler-pause-589189\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"c7efc0b224419bf12ab93741ef42c026\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-scheduler-pause-589189_c7efc0b224419bf12ab93741ef42c026/kube-scheduler/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-scheduler\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/b01f2365fcc4585b3ff242fd9e5b06ee11f71a35bdc14e62bb767ea602fee819/merged","io.kubernetes.cri-o.Name":"k8s_kube-scheduler_kube-scheduler-pause-589189_k
ube-system_c7efc0b224419bf12ab93741ef42c026_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/8591909c742a1b3c7402d5dbd76abe5677f123cda77a60158d6a8391e8ecc042/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"8591909c742a1b3c7402d5dbd76abe5677f123cda77a60158d6a8391e8ecc042","io.kubernetes.cri-o.SandboxName":"k8s_kube-scheduler-pause-589189_kube-system_c7efc0b224419bf12ab93741ef42c026_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/c7efc0b224419bf12ab93741ef42c026/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/c7efc0b224419bf12ab93741ef42c026/containers/kube-scheduler/ac1617a1\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_p
ath\":\"/etc/kubernetes/scheduler.conf\",\"host_path\":\"/etc/kubernetes/scheduler.conf\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-scheduler-pause-589189","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"c7efc0b224419bf12ab93741ef42c026","kubernetes.io/config.hash":"c7efc0b224419bf12ab93741ef42c026","kubernetes.io/config.seen":"2024-01-03T20:35:46.473328193Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"48229eac57ee1c0f3d9f6bc28113a1e66b5ba7e3490f1f786830c924316737c5","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/48229eac57ee1c0f3d9f6bc28113a1e66b5ba7e3490f1f786830c924316737c5/userdata","rootfs":"/var/lib/containers/storage/overlay/c440c09f6aad08aa9ba0d690457a6361d751f35bfedb0f462767e3f3b4a1622c/merged","created":"2024-01-03T20:36:51.497907272Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container
.hash":"b60ddd3e","io.kubernetes.container.name":"kube-controller-manager","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"b60ddd3e\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"48229eac57ee1c0f3d9f6bc28113a1e66b5ba7e3490f1f786830c924316737c5","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2024-01-03T20:36:51.251546542Z","io.kubernetes.cri-o.Image":"9961cbceaf234d59b7dcf8a197a024f3e3ce4b7fe2b67c2378efd3d209ca994b","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-controller-manager:v1.28.4","io.kubernetes.cri-o.ImageRef":"9961cbceaf234d59b7dcf8a197a024f3e3
ce4b7fe2b67c2378efd3d209ca994b","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-controller-manager\",\"io.kubernetes.pod.name\":\"kube-controller-manager-pause-589189\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"c8685a37b3c1a0a54c11571df8b85a37\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-controller-manager-pause-589189_c8685a37b3c1a0a54c11571df8b85a37/kube-controller-manager/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-controller-manager\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/c440c09f6aad08aa9ba0d690457a6361d751f35bfedb0f462767e3f3b4a1622c/merged","io.kubernetes.cri-o.Name":"k8s_kube-controller-manager_kube-controller-manager-pause-589189_kube-system_c8685a37b3c1a0a54c11571df8b85a37_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/70f8e9f1783843accb2246408bcbf63727faf5d31577cc161f2954dd3dcfb4ae/userdata/resolv.conf","io.kubernetes.cri-o.Sandbo
xID":"70f8e9f1783843accb2246408bcbf63727faf5d31577cc161f2954dd3dcfb4ae","io.kubernetes.cri-o.SandboxName":"k8s_kube-controller-manager-pause-589189_kube-system_c8685a37b3c1a0a54c11571df8b85a37_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/ca-certificates\",\"host_path\":\"/etc/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/c8685a37b3c1a0a54c11571df8b85a37/containers/kube-controller-manager/89864557\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/c8685a37b3c1a0a54c11571df8b85a37/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":tr
ue,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/kubernetes/controller-manager.conf\",\"host_path\":\"/etc/kubernetes/controller-manager.conf\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/local/share/ca-certificates\",\"host_path\":\"/usr/local/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"host_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-controller-manager-pause-589189","io.kubernetes.pod.namespace":"k
ube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"c8685a37b3c1a0a54c11571df8b85a37","kubernetes.io/config.hash":"c8685a37b3c1a0a54c11571df8b85a37","kubernetes.io/config.seen":"2024-01-03T20:35:46.473327077Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"78f150ce8a96bcfa7c3d36b4aab3838e2634c219621756b6d4ce3911e540b5a8","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/78f150ce8a96bcfa7c3d36b4aab3838e2634c219621756b6d4ce3911e540b5a8/userdata","rootfs":"/var/lib/containers/storage/overlay/b9bb6a880bd3f946a564c1fd9a9056d4669ef246f9a02fc185e15434fb6f8bd5/merged","created":"2024-01-03T20:36:51.533425892Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"53a0a7f2","io.kubernetes.container.name":"kindnet-cni","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.ku
bernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"53a0a7f2\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"78f150ce8a96bcfa7c3d36b4aab3838e2634c219621756b6d4ce3911e540b5a8","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2024-01-03T20:36:51.327063676Z","io.kubernetes.cri-o.Image":"04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26","io.kubernetes.cri-o.ImageName":"docker.io/kindest/kindnetd:v20230809-80a64d96","io.kubernetes.cri-o.ImageRef":"04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kindnet-cni\",\"io.kubernetes.pod.name\":\"kindnet-xh476\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"16420a9b-d68e-4a16-84d7
-e6344f3b9f27\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kindnet-xh476_16420a9b-d68e-4a16-84d7-e6344f3b9f27/kindnet-cni/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kindnet-cni\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/b9bb6a880bd3f946a564c1fd9a9056d4669ef246f9a02fc185e15434fb6f8bd5/merged","io.kubernetes.cri-o.Name":"k8s_kindnet-cni_kindnet-xh476_kube-system_16420a9b-d68e-4a16-84d7-e6344f3b9f27_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/14ee1f0f3f9368277ea49cbedd43096b3138b778b533d2940e65d4dde11de2f1/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"14ee1f0f3f9368277ea49cbedd43096b3138b778b533d2940e65d4dde11de2f1","io.kubernetes.cri-o.SandboxName":"k8s_kindnet-xh476_kube-system_16420a9b-d68e-4a16-84d7-e6344f3b9f27_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":
"[{\"container_path\":\"/run/xtables.lock\",\"host_path\":\"/run/xtables.lock\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/lib/modules\",\"host_path\":\"/lib/modules\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/16420a9b-d68e-4a16-84d7-e6344f3b9f27/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/16420a9b-d68e-4a16-84d7-e6344f3b9f27/containers/kindnet-cni/76cb6045\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/cni/net.d\",\"host_path\":\"/etc/cni/net.d\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/16420a9b-d68e-4a16-84d7-e6344f3b9f27/volumes/kubernetes.io~projected/kube-api-access-s5rgx\",\"readon
ly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kindnet-xh476","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"16420a9b-d68e-4a16-84d7-e6344f3b9f27","kubernetes.io/config.seen":"2024-01-03T20:36:08.173497705Z","kubernetes.io/config.source":"api"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"85eb04634a23fe2f0cabe61dd973cbc0b3efaf531040055e52ee92f023d47bfe","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/85eb04634a23fe2f0cabe61dd973cbc0b3efaf531040055e52ee92f023d47bfe/userdata","rootfs":"/var/lib/containers/storage/overlay/0bff73e6cecb5b2067c8be92c390bd6fad7224c873bec7c032b96a11e5f64822/merged","created":"2024-01-03T20:36:51.536971916Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"21ee4fc9","io.kubernetes.container.name":"kube-proxy","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-
log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"21ee4fc9\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"85eb04634a23fe2f0cabe61dd973cbc0b3efaf531040055e52ee92f023d47bfe","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2024-01-03T20:36:51.351139944Z","io.kubernetes.cri-o.Image":"3ca3ca488cf13fde14cfc4b3ffde0c53a8c161b030f4a444a797fba6aef38c39","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-proxy:v1.28.4","io.kubernetes.cri-o.ImageRef":"3ca3ca488cf13fde14cfc4b3ffde0c53a8c161b030f4a444a797fba6aef38c39","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-proxy\",\"io.kubernetes.pod.name\":\"kube-proxy-qptr2\",\"io.kubernetes.pod.namespace\":\"kube-
system\",\"io.kubernetes.pod.uid\":\"a55774e9-f310-4e29-be2d-81f71022a59b\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-proxy-qptr2_a55774e9-f310-4e29-be2d-81f71022a59b/kube-proxy/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-proxy\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/0bff73e6cecb5b2067c8be92c390bd6fad7224c873bec7c032b96a11e5f64822/merged","io.kubernetes.cri-o.Name":"k8s_kube-proxy_kube-proxy-qptr2_kube-system_a55774e9-f310-4e29-be2d-81f71022a59b_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/2502534472b45cfc8467d7909c5fc2b7ea2fda234d7ab31cbad29a69a74d9e1d/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"2502534472b45cfc8467d7909c5fc2b7ea2fda234d7ab31cbad29a69a74d9e1d","io.kubernetes.cri-o.SandboxName":"k8s_kube-proxy-qptr2_kube-system_a55774e9-f310-4e29-be2d-81f71022a59b_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false
","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/run/xtables.lock\",\"host_path\":\"/run/xtables.lock\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/lib/modules\",\"host_path\":\"/lib/modules\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/a55774e9-f310-4e29-be2d-81f71022a59b/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/a55774e9-f310-4e29-be2d-81f71022a59b/containers/kube-proxy/e73d2ce9\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/kube-proxy\",\"host_path\":\"/var/lib/kubelet/pods/a55774e9-f310-4e29-be2d-81f71022a59b/volumes/kubernetes.io~configmap/kube-proxy\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/
serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/a55774e9-f310-4e29-be2d-81f71022a59b/volumes/kubernetes.io~projected/kube-api-access-r7dqx\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-proxy-qptr2","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"a55774e9-f310-4e29-be2d-81f71022a59b","kubernetes.io/config.seen":"2024-01-03T20:36:08.132625383Z","kubernetes.io/config.source":"api"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"d22ad169c567e2011ce0d9196523e4a33db859f3ebadfc4f50c24d4372357ae0","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/d22ad169c567e2011ce0d9196523e4a33db859f3ebadfc4f50c24d4372357ae0/userdata","rootfs":"/var/lib/containers/storage/overlay/df076d67d4fd2bde3e504152acc2eecb61a661cc8d4d6612ee9c3103fa0acda6/merged","created":"2024-01-03T20:36:51.531815889Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"8d500
d9b","io.kubernetes.container.name":"coredns","io.kubernetes.container.ports":"[{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}]","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"8d500d9b\",\"io.kubernetes.container.ports\":\"[{\\\"name\\\":\\\"dns\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"UDP\\\"},{\\\"name\\\":\\\"dns-tcp\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"TCP\\\"},{\\\"name\\\":\\\"metrics\\\",\\\"containerPort\\\":9153,\\\"protocol\\\":\\\"TCP\\\"}]\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.
pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"d22ad169c567e2011ce0d9196523e4a33db859f3ebadfc4f50c24d4372357ae0","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2024-01-03T20:36:51.39115047Z","io.kubernetes.cri-o.IP.0":"10.244.0.2","io.kubernetes.cri-o.Image":"97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108","io.kubernetes.cri-o.ImageName":"registry.k8s.io/coredns/coredns:v1.10.1","io.kubernetes.cri-o.ImageRef":"97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"coredns\",\"io.kubernetes.pod.name\":\"coredns-5dd5756b68-q766p\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"88099227-8f36-44d9-b01c-1d8a5fca054a\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_coredns-5dd5756b68-q766p_88099227-8f36-44d9-b01c-1d8a5fca054a/coredns/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"coredns\",\"attempt\":1}","io.kubernetes.
cri-o.MountPoint":"/var/lib/containers/storage/overlay/df076d67d4fd2bde3e504152acc2eecb61a661cc8d4d6612ee9c3103fa0acda6/merged","io.kubernetes.cri-o.Name":"k8s_coredns_coredns-5dd5756b68-q766p_kube-system_88099227-8f36-44d9-b01c-1d8a5fca054a_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/959adb8df96711128ac2f95719313a3663564c9dde30a4e9e60a006ae7c618d5/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"959adb8df96711128ac2f95719313a3663564c9dde30a4e9e60a006ae7c618d5","io.kubernetes.cri-o.SandboxName":"k8s_coredns-5dd5756b68-q766p_kube-system_88099227-8f36-44d9-b01c-1d8a5fca054a_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/coredns\",\"host_path\":\"/var/lib/kubelet/pods/88099227-8f36-44d9-b01c-1d8a5fca054a/volumes/kubernetes.io~configmap/config-volume\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\
":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/88099227-8f36-44d9-b01c-1d8a5fca054a/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/88099227-8f36-44d9-b01c-1d8a5fca054a/containers/coredns/e9efd089\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/88099227-8f36-44d9-b01c-1d8a5fca054a/volumes/kubernetes.io~projected/kube-api-access-ftssm\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"coredns-5dd5756b68-q766p","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"88099227-8f36-44d9-b01c-1d8a5fca054a","kubernetes.io/config.seen":"2024-01-03T20:36:40.172142757Z","kubernetes.io/config.source":"api"},"owner":"root"},{"ociVersion":"1.0.2-dev","i
d":"fe332dccccd3cfdf9eb22fda618573d4ce50f4b9a7d78741ca394f0248e4a1bb","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/fe332dccccd3cfdf9eb22fda618573d4ce50f4b9a7d78741ca394f0248e4a1bb/userdata","rootfs":"/var/lib/containers/storage/overlay/8b625fcf8bcc4556bc35e38746ac758b70183066837d58d9b472a8011dc6e72c/merged","created":"2024-01-03T20:36:51.527530626Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"c95a9554","io.kubernetes.container.name":"kube-apiserver","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"c95a9554\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kub
ernetes.cri-o.ContainerID":"fe332dccccd3cfdf9eb22fda618573d4ce50f4b9a7d78741ca394f0248e4a1bb","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2024-01-03T20:36:51.404004281Z","io.kubernetes.cri-o.Image":"04b4c447bb9d4840af3bf7e836397379d65df87c86e55dcd27f31a8d11df2419","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-apiserver:v1.28.4","io.kubernetes.cri-o.ImageRef":"04b4c447bb9d4840af3bf7e836397379d65df87c86e55dcd27f31a8d11df2419","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-apiserver\",\"io.kubernetes.pod.name\":\"kube-apiserver-pause-589189\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"fa472bc019626d20b5c1268d724294cd\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-apiserver-pause-589189_fa472bc019626d20b5c1268d724294cd/kube-apiserver/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-apiserver\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/8b625fcf8bc
c4556bc35e38746ac758b70183066837d58d9b472a8011dc6e72c/merged","io.kubernetes.cri-o.Name":"k8s_kube-apiserver_kube-apiserver-pause-589189_kube-system_fa472bc019626d20b5c1268d724294cd_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/b8e42777c245aeb46734f67be170dfa05712c8d1653fba7ddc84b9e99a6a5e5f/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"b8e42777c245aeb46734f67be170dfa05712c8d1653fba7ddc84b9e99a6a5e5f","io.kubernetes.cri-o.SandboxName":"k8s_kube-apiserver-pause-589189_kube-system_fa472bc019626d20b5c1268d724294cd_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/fa472bc019626d20b5c1268d724294cd/containers/kube-apiserver/1843f7f8\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ca-certificates\",\"host_path\"
:\"/etc/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/fa472bc019626d20b5c1268d724294cd/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/local/share/ca-certificates\",\"host_path\":\"/usr/local/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-apiserver-pause-589189","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGrac
ePeriod":"30","io.kubernetes.pod.uid":"fa472bc019626d20b5c1268d724294cd","kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.76.2:8443","kubernetes.io/config.hash":"fa472bc019626d20b5c1268d724294cd","kubernetes.io/config.seen":"2024-01-03T20:35:46.473325797Z","kubernetes.io/config.source":"file"},"owner":"root"}]
	I0103 20:36:53.047445  545961 cri.go:126] list returned 7 containers
	I0103 20:36:53.047489  545961 cri.go:129] container: {ID:073d47a61187f01ab01df5981bf363672c9b8fe4f93af6e737cdcbb14709924b Status:stopped}
	I0103 20:36:53.047526  545961 cri.go:135] skipping {073d47a61187f01ab01df5981bf363672c9b8fe4f93af6e737cdcbb14709924b stopped}: state = "stopped", want "paused"
	I0103 20:36:53.047550  545961 cri.go:129] container: {ID:0b4de2b22c7b4724729f6e7fb8c0e390ff80431329c90ae0cc8d0a069bcdfb8d Status:stopped}
	I0103 20:36:53.047585  545961 cri.go:135] skipping {0b4de2b22c7b4724729f6e7fb8c0e390ff80431329c90ae0cc8d0a069bcdfb8d stopped}: state = "stopped", want "paused"
	I0103 20:36:53.047604  545961 cri.go:129] container: {ID:48229eac57ee1c0f3d9f6bc28113a1e66b5ba7e3490f1f786830c924316737c5 Status:stopped}
	I0103 20:36:53.047624  545961 cri.go:135] skipping {48229eac57ee1c0f3d9f6bc28113a1e66b5ba7e3490f1f786830c924316737c5 stopped}: state = "stopped", want "paused"
	I0103 20:36:53.047643  545961 cri.go:129] container: {ID:78f150ce8a96bcfa7c3d36b4aab3838e2634c219621756b6d4ce3911e540b5a8 Status:stopped}
	I0103 20:36:53.047670  545961 cri.go:135] skipping {78f150ce8a96bcfa7c3d36b4aab3838e2634c219621756b6d4ce3911e540b5a8 stopped}: state = "stopped", want "paused"
	I0103 20:36:53.047702  545961 cri.go:129] container: {ID:85eb04634a23fe2f0cabe61dd973cbc0b3efaf531040055e52ee92f023d47bfe Status:stopped}
	I0103 20:36:53.047723  545961 cri.go:135] skipping {85eb04634a23fe2f0cabe61dd973cbc0b3efaf531040055e52ee92f023d47bfe stopped}: state = "stopped", want "paused"
	I0103 20:36:53.047744  545961 cri.go:129] container: {ID:d22ad169c567e2011ce0d9196523e4a33db859f3ebadfc4f50c24d4372357ae0 Status:stopped}
	I0103 20:36:53.047777  545961 cri.go:135] skipping {d22ad169c567e2011ce0d9196523e4a33db859f3ebadfc4f50c24d4372357ae0 stopped}: state = "stopped", want "paused"
	I0103 20:36:53.047802  545961 cri.go:129] container: {ID:fe332dccccd3cfdf9eb22fda618573d4ce50f4b9a7d78741ca394f0248e4a1bb Status:stopped}
	I0103 20:36:53.047821  545961 cri.go:135] skipping {fe332dccccd3cfdf9eb22fda618573d4ce50f4b9a7d78741ca394f0248e4a1bb stopped}: state = "stopped", want "paused"
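	For readers tracing the unpause path: the cri.go:129/135 lines above list every container and skip any whose state is not the requested "paused". A minimal Go sketch of that filter (illustrative only, not minikube's source; the container record here is hypothetical):

	package main

	import "fmt"

	type container struct {
		ID     string
		Status string
	}

	// filterByState keeps only containers whose status matches want, logging a
	// skip line for everything else, mirroring the cri.go output above.
	func filterByState(all []container, want string) []container {
		var matched []container
		for _, c := range all {
			if c.Status != want {
				fmt.Printf("skipping {%s %s}: state = %q, want %q\n", c.ID, c.Status, c.Status, want)
				continue
			}
			matched = append(matched, c)
		}
		return matched
	}

	func main() {
		list := []container{{ID: "073d47a61187f01a", Status: "stopped"}}
		fmt.Println(filterByState(list, "paused")) // prints the skip line, returns []
	}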
	I0103 20:36:53.047910  545961 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0103 20:36:53.059269  545961 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0103 20:36:53.059295  545961 kubeadm.go:636] restartCluster start
	I0103 20:36:53.059357  545961 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0103 20:36:53.070152  545961 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:36:53.070976  545961 kubeconfig.go:92] found "pause-589189" server: "https://192.168.76.2:8443"
	I0103 20:36:53.072019  545961 kapi.go:59] client config for pause-589189: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17885-409390/.minikube/profiles/pause-589189/client.crt", KeyFile:"/home/jenkins/minikube-integration/17885-409390/.minikube/profiles/pause-589189/client.key", CAFile:"/home/jenkins/minikube-integration/17885-409390/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]s
tring(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16bfa00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0103 20:36:53.072887  545961 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0103 20:36:53.084281  545961 api_server.go:166] Checking apiserver status ...
	I0103 20:36:53.084349  545961 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:36:53.096709  545961 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:36:53.584962  545961 api_server.go:166] Checking apiserver status ...
	I0103 20:36:53.585073  545961 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:36:53.597606  545961 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:36:54.084930  545961 api_server.go:166] Checking apiserver status ...
	I0103 20:36:54.085121  545961 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:36:54.098113  545961 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:36:54.584425  545961 api_server.go:166] Checking apiserver status ...
	I0103 20:36:54.584515  545961 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:36:54.597593  545961 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:36:55.085219  545961 api_server.go:166] Checking apiserver status ...
	I0103 20:36:55.085371  545961 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:36:55.100080  545961 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:36:55.584425  545961 api_server.go:166] Checking apiserver status ...
	I0103 20:36:55.584516  545961 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:36:55.597508  545961 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:36:56.085185  545961 api_server.go:166] Checking apiserver status ...
	I0103 20:36:56.085313  545961 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:36:56.101274  545961 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:36:56.584415  545961 api_server.go:166] Checking apiserver status ...
	I0103 20:36:56.584551  545961 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:36:56.599226  545961 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:36:57.084790  545961 api_server.go:166] Checking apiserver status ...
	I0103 20:36:57.084912  545961 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:36:57.098026  545961 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:36:57.584496  545961 api_server.go:166] Checking apiserver status ...
	I0103 20:36:57.584611  545961 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:36:57.596820  545961 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:36:58.084333  545961 api_server.go:166] Checking apiserver status ...
	I0103 20:36:58.084442  545961 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:36:58.101289  545961 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:36:58.584560  545961 api_server.go:166] Checking apiserver status ...
	I0103 20:36:58.584666  545961 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:36:58.597602  545961 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:36:59.085227  545961 api_server.go:166] Checking apiserver status ...
	I0103 20:36:59.085311  545961 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:36:59.097456  545961 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:36:59.585110  545961 api_server.go:166] Checking apiserver status ...
	I0103 20:36:59.585217  545961 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:36:59.597652  545961 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:37:00.098102  545961 api_server.go:166] Checking apiserver status ...
	I0103 20:37:00.098209  545961 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:37:00.170091  545961 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:37:00.584427  545961 api_server.go:166] Checking apiserver status ...
	I0103 20:37:00.584549  545961 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:37:00.598002  545961 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:37:01.084605  545961 api_server.go:166] Checking apiserver status ...
	I0103 20:37:01.084707  545961 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:37:01.097041  545961 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:37:01.584424  545961 api_server.go:166] Checking apiserver status ...
	I0103 20:37:01.584538  545961 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:37:01.598533  545961 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:37:02.084694  545961 api_server.go:166] Checking apiserver status ...
	I0103 20:37:02.084799  545961 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:37:02.097849  545961 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:37:02.584822  545961 api_server.go:166] Checking apiserver status ...
	I0103 20:37:02.584906  545961 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:37:02.598458  545961 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:37:03.085038  545961 api_server.go:166] Checking apiserver status ...
	I0103 20:37:03.085138  545961 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0103 20:37:03.106395  545961 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:37:03.106420  545961 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
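	The twenty-odd pgrep probes above are a fixed-interval poll that ends when its context deadline expires, at which point minikube concludes the cluster needs reconfiguring. A self-contained Go sketch of the same pattern (hypothetical helper; the real code runs pgrep over ssh via ssh_runner):

	package main

	import (
		"context"
		"fmt"
		"os/exec"
		"time"
	)

	// waitForAPIServerPID retries pgrep until it finds a kube-apiserver process
	// or the context deadline expires. pgrep exits 1 when nothing matches, which
	// exec surfaces as an error.
	func waitForAPIServerPID(ctx context.Context) error {
		for {
			if out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output(); err == nil {
				fmt.Printf("apiserver pid: %s", out)
				return nil
			}
			select {
			case <-ctx.Done():
				return fmt.Errorf("needs reconfigure: apiserver error: %w", ctx.Err())
			case <-time.After(500 * time.Millisecond):
			}
		}
	}

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
		defer cancel()
		fmt.Println(waitForAPIServerPID(ctx))
	}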
	I0103 20:37:03.106443  545961 kubeadm.go:1135] stopping kube-system containers ...
	I0103 20:37:03.106453  545961 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0103 20:37:03.106533  545961 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0103 20:37:03.186145  545961 cri.go:89] found id: "0e822a87ca22ecbd73e32a4bb31f706833c88c097de26c88b332a4865097c93f"
	I0103 20:37:03.186170  545961 cri.go:89] found id: "fe332dccccd3cfdf9eb22fda618573d4ce50f4b9a7d78741ca394f0248e4a1bb"
	I0103 20:37:03.186181  545961 cri.go:89] found id: "d22ad169c567e2011ce0d9196523e4a33db859f3ebadfc4f50c24d4372357ae0"
	I0103 20:37:03.186185  545961 cri.go:89] found id: "85eb04634a23fe2f0cabe61dd973cbc0b3efaf531040055e52ee92f023d47bfe"
	I0103 20:37:03.186190  545961 cri.go:89] found id: "78f150ce8a96bcfa7c3d36b4aab3838e2634c219621756b6d4ce3911e540b5a8"
	I0103 20:37:03.186198  545961 cri.go:89] found id: "48229eac57ee1c0f3d9f6bc28113a1e66b5ba7e3490f1f786830c924316737c5"
	I0103 20:37:03.186206  545961 cri.go:89] found id: "0b4de2b22c7b4724729f6e7fb8c0e390ff80431329c90ae0cc8d0a069bcdfb8d"
	I0103 20:37:03.186210  545961 cri.go:89] found id: "073d47a61187f01ab01df5981bf363672c9b8fe4f93af6e737cdcbb14709924b"
	I0103 20:37:03.186218  545961 cri.go:89] found id: ""
	I0103 20:37:03.186224  545961 cri.go:234] Stopping containers: [0e822a87ca22ecbd73e32a4bb31f706833c88c097de26c88b332a4865097c93f fe332dccccd3cfdf9eb22fda618573d4ce50f4b9a7d78741ca394f0248e4a1bb d22ad169c567e2011ce0d9196523e4a33db859f3ebadfc4f50c24d4372357ae0 85eb04634a23fe2f0cabe61dd973cbc0b3efaf531040055e52ee92f023d47bfe 78f150ce8a96bcfa7c3d36b4aab3838e2634c219621756b6d4ce3911e540b5a8 48229eac57ee1c0f3d9f6bc28113a1e66b5ba7e3490f1f786830c924316737c5 0b4de2b22c7b4724729f6e7fb8c0e390ff80431329c90ae0cc8d0a069bcdfb8d 073d47a61187f01ab01df5981bf363672c9b8fe4f93af6e737cdcbb14709924b]
	I0103 20:37:03.186309  545961 ssh_runner.go:195] Run: which crictl
	I0103 20:37:03.193312  545961 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop --timeout=10 0e822a87ca22ecbd73e32a4bb31f706833c88c097de26c88b332a4865097c93f fe332dccccd3cfdf9eb22fda618573d4ce50f4b9a7d78741ca394f0248e4a1bb d22ad169c567e2011ce0d9196523e4a33db859f3ebadfc4f50c24d4372357ae0 85eb04634a23fe2f0cabe61dd973cbc0b3efaf531040055e52ee92f023d47bfe 78f150ce8a96bcfa7c3d36b4aab3838e2634c219621756b6d4ce3911e540b5a8 48229eac57ee1c0f3d9f6bc28113a1e66b5ba7e3490f1f786830c924316737c5 0b4de2b22c7b4724729f6e7fb8c0e390ff80431329c90ae0cc8d0a069bcdfb8d 073d47a61187f01ab01df5981bf363672c9b8fe4f93af6e737cdcbb14709924b
	I0103 20:37:03.439928  545961 ssh_runner.go:195] Run: sudo systemctl stop kubelet
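	The stop step above is two commands: one crictl invocation that stops every listed kube-system container, then a kubelet stop. A sketch in Go (assumes crictl and sudo on the local host; the test runs both over ssh):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// List kube-system container IDs, as the "crictl ps -a --quiet" line above does.
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system").Output()
		if err != nil {
			fmt.Println(err)
			return
		}
		ids := strings.Fields(string(out))
		// Stop them all in one call with a 10s per-container grace period.
		args := append([]string{"crictl", "stop", "--timeout=10"}, ids...)
		if out, err := exec.Command("sudo", args...).CombinedOutput(); err != nil {
			fmt.Printf("%s: %v\n", out, err)
			return
		}
		fmt.Println(exec.Command("sudo", "systemctl", "stop", "kubelet").Run())
	}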
	I0103 20:37:03.546346  545961 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0103 20:37:03.558231  545961 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5639 Jan  3 20:35 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Jan  3 20:35 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 1987 Jan  3 20:35 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Jan  3 20:35 /etc/kubernetes/scheduler.conf
	
	I0103 20:37:03.558302  545961 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0103 20:37:03.570379  545961 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0103 20:37:03.581923  545961 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0103 20:37:03.593604  545961 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:37:03.593675  545961 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0103 20:37:03.604644  545961 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0103 20:37:03.615888  545961 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0103 20:37:03.616031  545961 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
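	The grep/rm pairs above implement a simple rule: any kubeconfig that no longer mentions the expected control-plane endpoint (grep exits 1) is deleted so the following kubeadm phases regenerate it. A sketch of that loop (paths from the log; must run as root):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		endpoint := "https://control-plane.minikube.internal:8443"
		for _, f := range []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		} {
			// grep returns a non-nil error (exit status 1) when the endpoint is absent.
			if err := exec.Command("grep", endpoint, f).Run(); err != nil {
				fmt.Printf("%q may not be in %s - will remove\n", endpoint, f)
				if err := os.Remove(f); err != nil {
					fmt.Println(err)
				}
			}
		}
	}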
	I0103 20:37:03.627298  545961 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0103 20:37:03.638735  545961 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0103 20:37:03.638759  545961 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0103 20:37:03.720514  545961 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0103 20:37:06.977511  545961 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (3.256960874s)
	I0103 20:37:06.977542  545961 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0103 20:37:07.303856  545961 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0103 20:37:07.591885  545961 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
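	The five Run lines above replay individual kubeadm init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the same config instead of doing a full kubeadm init. A condensed sketch (assumes a kubeadm binary on PATH; the test invokes the versioned binary under /var/lib/minikube/binaries over ssh):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		phases := [][]string{
			{"init", "phase", "certs", "all"},
			{"init", "phase", "kubeconfig", "all"},
			{"init", "phase", "kubelet-start"},
			{"init", "phase", "control-plane", "all"},
			{"init", "phase", "etcd", "local"},
		}
		for _, p := range phases {
			args := append(p, "--config", "/var/tmp/minikube/kubeadm.yaml")
			out, err := exec.Command("kubeadm", args...).CombinedOutput()
			fmt.Printf("kubeadm %v: err=%v\n%s", p, err, out)
			if err != nil {
				return // each phase depends on the previous one
			}
		}
	}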
	I0103 20:37:07.803050  545961 api_server.go:52] waiting for apiserver process to appear ...
	I0103 20:37:07.803158  545961 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 20:37:08.303249  545961 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 20:37:08.803771  545961 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 20:37:08.835170  545961 api_server.go:72] duration metric: took 1.032120108s to wait for apiserver process to appear ...
	I0103 20:37:08.835193  545961 api_server.go:88] waiting for apiserver healthz status ...
	I0103 20:37:08.835212  545961 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0103 20:37:08.835482  545961 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0103 20:37:09.335718  545961 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0103 20:37:14.336246  545961 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0103 20:37:14.336303  545961 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0103 20:37:16.749007  545961 api_server.go:279] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0103 20:37:16.749037  545961 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0103 20:37:16.749051  545961 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0103 20:37:16.880175  545961 api_server.go:279] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0103 20:37:16.880209  545961 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0103 20:37:16.880223  545961 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0103 20:37:16.975460  545961 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0103 20:37:16.975490  545961 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0103 20:37:17.335667  545961 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0103 20:37:17.345282  545961 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0103 20:37:17.345313  545961 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0103 20:37:17.835688  545961 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0103 20:37:17.933829  545961 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0103 20:37:17.933869  545961 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0103 20:37:18.336324  545961 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0103 20:37:18.346326  545961 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0103 20:37:18.346400  545961 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0103 20:37:18.835690  545961 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0103 20:37:18.882273  545961 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0103 20:37:18.918039  545961 api_server.go:141] control plane version: v1.28.4
	I0103 20:37:18.918066  545961 api_server.go:131] duration metric: took 10.082867395s to wait for apiserver health ...
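	The healthz transcript above is the normal apiserver startup progression: probes get 403 while the rbac/bootstrap-roles poststarthook is still failing, then 500 with a [+]/[-] checklist as the remaining hooks finish, then a plain 200 "ok". A minimal Go poller in the same spirit (illustrative; it skips TLS verification for brevity, whereas the real client trusts the cluster CA):

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func pollHealthz(url string) error {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		for i := 0; i < 20; i++ {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				fmt.Printf("%s returned %d:\n%s\n", url, resp.StatusCode, body)
				if resp.StatusCode == http.StatusOK {
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("healthz never returned 200")
	}

	func main() {
		fmt.Println(pollHealthz("https://192.168.76.2:8443/healthz"))
	}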
	I0103 20:37:18.918076  545961 cni.go:84] Creating CNI manager for ""
	I0103 20:37:18.918083  545961 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0103 20:37:18.921017  545961 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0103 20:37:18.925977  545961 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0103 20:37:18.939223  545961 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0103 20:37:18.939243  545961 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0103 20:37:18.986115  545961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0103 20:37:19.934069  545961 system_pods.go:43] waiting for kube-system pods to appear ...
	I0103 20:37:19.944219  545961 system_pods.go:59] 7 kube-system pods found
	I0103 20:37:19.944249  545961 system_pods.go:61] "coredns-5dd5756b68-q766p" [88099227-8f36-44d9-b01c-1d8a5fca054a] Running
	I0103 20:37:19.944266  545961 system_pods.go:61] "etcd-pause-589189" [17cf6adf-0fad-4a34-b8fe-b2560e398e68] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0103 20:37:19.944279  545961 system_pods.go:61] "kindnet-xh476" [16420a9b-d68e-4a16-84d7-e6344f3b9f27] Running
	I0103 20:37:19.944295  545961 system_pods.go:61] "kube-apiserver-pause-589189" [d3e43690-d065-4288-a1f7-795767f523a3] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0103 20:37:19.944304  545961 system_pods.go:61] "kube-controller-manager-pause-589189" [3f7f4884-9a81-4f79-b9e6-9241b8d09840] Running
	I0103 20:37:19.944310  545961 system_pods.go:61] "kube-proxy-qptr2" [a55774e9-f310-4e29-be2d-81f71022a59b] Running
	I0103 20:37:19.944317  545961 system_pods.go:61] "kube-scheduler-pause-589189" [0efb433d-48a7-44d8-a799-e4be9725e28b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0103 20:37:19.944326  545961 system_pods.go:74] duration metric: took 10.238279ms to wait for pod list to return data ...
	I0103 20:37:19.944345  545961 node_conditions.go:102] verifying NodePressure condition ...
	I0103 20:37:19.948540  545961 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0103 20:37:19.949047  545961 node_conditions.go:123] node cpu capacity is 2
	I0103 20:37:19.949062  545961 node_conditions.go:105] duration metric: took 4.707765ms to run NodePressure ...
	I0103 20:37:19.949089  545961 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0103 20:37:20.200831  545961 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0103 20:37:20.208962  545961 kubeadm.go:787] kubelet initialised
	I0103 20:37:20.208989  545961 kubeadm.go:788] duration metric: took 8.134367ms waiting for restarted kubelet to initialise ...
	I0103 20:37:20.208999  545961 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0103 20:37:20.216768  545961 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-q766p" in "kube-system" namespace to be "Ready" ...
	I0103 20:37:20.225478  545961 pod_ready.go:92] pod "coredns-5dd5756b68-q766p" in "kube-system" namespace has status "Ready":"True"
	I0103 20:37:20.225505  545961 pod_ready.go:81] duration metric: took 8.703655ms waiting for pod "coredns-5dd5756b68-q766p" in "kube-system" namespace to be "Ready" ...
	I0103 20:37:20.225521  545961 pod_ready.go:78] waiting up to 4m0s for pod "etcd-pause-589189" in "kube-system" namespace to be "Ready" ...
	I0103 20:37:22.233525  545961 pod_ready.go:102] pod "etcd-pause-589189" in "kube-system" namespace has status "Ready":"False"
	I0103 20:37:24.732431  545961 pod_ready.go:102] pod "etcd-pause-589189" in "kube-system" namespace has status "Ready":"False"
	I0103 20:37:26.232731  545961 pod_ready.go:92] pod "etcd-pause-589189" in "kube-system" namespace has status "Ready":"True"
	I0103 20:37:26.232751  545961 pod_ready.go:81] duration metric: took 6.007221011s waiting for pod "etcd-pause-589189" in "kube-system" namespace to be "Ready" ...
	I0103 20:37:26.232765  545961 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-pause-589189" in "kube-system" namespace to be "Ready" ...
	I0103 20:37:26.240943  545961 pod_ready.go:92] pod "kube-apiserver-pause-589189" in "kube-system" namespace has status "Ready":"True"
	I0103 20:37:26.240963  545961 pod_ready.go:81] duration metric: took 8.191096ms waiting for pod "kube-apiserver-pause-589189" in "kube-system" namespace to be "Ready" ...
	I0103 20:37:26.240974  545961 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-589189" in "kube-system" namespace to be "Ready" ...
	I0103 20:37:26.254218  545961 pod_ready.go:92] pod "kube-controller-manager-pause-589189" in "kube-system" namespace has status "Ready":"True"
	I0103 20:37:26.254284  545961 pod_ready.go:81] duration metric: took 13.301408ms waiting for pod "kube-controller-manager-pause-589189" in "kube-system" namespace to be "Ready" ...
	I0103 20:37:26.254311  545961 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-qptr2" in "kube-system" namespace to be "Ready" ...
	I0103 20:37:26.260999  545961 pod_ready.go:92] pod "kube-proxy-qptr2" in "kube-system" namespace has status "Ready":"True"
	I0103 20:37:26.261067  545961 pod_ready.go:81] duration metric: took 6.726059ms waiting for pod "kube-proxy-qptr2" in "kube-system" namespace to be "Ready" ...
	I0103 20:37:26.261091  545961 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-589189" in "kube-system" namespace to be "Ready" ...
	I0103 20:37:26.271995  545961 pod_ready.go:92] pod "kube-scheduler-pause-589189" in "kube-system" namespace has status "Ready":"True"
	I0103 20:37:26.272063  545961 pod_ready.go:81] duration metric: took 10.948596ms waiting for pod "kube-scheduler-pause-589189" in "kube-system" namespace to be "Ready" ...
	I0103 20:37:26.272088  545961 pod_ready.go:38] duration metric: took 6.063077977s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0103 20:37:26.272132  545961 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0103 20:37:26.285009  545961 ops.go:34] apiserver oom_adj: -16
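	An oom_adj of -16 tells the kernel's OOM killer to strongly prefer other victims over the apiserver. A small sketch of the same /proc read (local-host assumption; the test pipes it through bash over ssh):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	func main() {
		// pgrep -n returns the newest matching pid.
		pid, err := exec.Command("pgrep", "-n", "kube-apiserver").Output()
		if err != nil {
			fmt.Println("no kube-apiserver process:", err)
			return
		}
		data, err := os.ReadFile("/proc/" + strings.TrimSpace(string(pid)) + "/oom_adj")
		if err != nil {
			fmt.Println(err)
			return
		}
		fmt.Print("apiserver oom_adj: ", string(data))
	}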
	I0103 20:37:26.285080  545961 kubeadm.go:640] restartCluster took 33.225776576s
	I0103 20:37:26.285102  545961 kubeadm.go:406] StartCluster complete in 33.309561555s
	I0103 20:37:26.285130  545961 settings.go:142] acquiring lock: {Name:mk35e0b2d8071191a72193c66ba9549131012420 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 20:37:26.285219  545961 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17885-409390/kubeconfig
	I0103 20:37:26.287070  545961 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17885-409390/kubeconfig: {Name:mkcf9b222e1b36afc1c2e4e412234b0c105c9bc2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 20:37:26.287399  545961 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0103 20:37:26.287749  545961 config.go:182] Loaded profile config "pause-589189": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0103 20:37:26.288040  545961 addons.go:505] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0103 20:37:26.290171  545961 out.go:177] * Enabled addons: 
	I0103 20:37:26.289023  545961 kapi.go:59] client config for pause-589189: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17885-409390/.minikube/profiles/pause-589189/client.crt", KeyFile:"/home/jenkins/minikube-integration/17885-409390/.minikube/profiles/pause-589189/client.key", CAFile:"/home/jenkins/minikube-integration/17885-409390/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]s
tring(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16bfa00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0103 20:37:26.294747  545961 addons.go:508] enable addons completed in 6.723835ms: enabled=[]
	I0103 20:37:26.299312  545961 kapi.go:248] "coredns" deployment in "kube-system" namespace and "pause-589189" context rescaled to 1 replicas
	I0103 20:37:26.299386  545961 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0103 20:37:26.302597  545961 out.go:177] * Verifying Kubernetes components...
	I0103 20:37:26.304313  545961 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0103 20:37:26.490971  545961 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0103 20:37:26.491025  545961 node_ready.go:35] waiting up to 6m0s for node "pause-589189" to be "Ready" ...
	I0103 20:37:26.499892  545961 node_ready.go:49] node "pause-589189" has status "Ready":"True"
	I0103 20:37:26.499920  545961 node_ready.go:38] duration metric: took 8.881269ms waiting for node "pause-589189" to be "Ready" ...
	I0103 20:37:26.499933  545961 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0103 20:37:26.632393  545961 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-q766p" in "kube-system" namespace to be "Ready" ...
	I0103 20:37:27.034670  545961 pod_ready.go:92] pod "coredns-5dd5756b68-q766p" in "kube-system" namespace has status "Ready":"True"
	I0103 20:37:27.034702  545961 pod_ready.go:81] duration metric: took 402.276796ms waiting for pod "coredns-5dd5756b68-q766p" in "kube-system" namespace to be "Ready" ...
	I0103 20:37:27.034714  545961 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-589189" in "kube-system" namespace to be "Ready" ...
	I0103 20:37:27.431905  545961 pod_ready.go:92] pod "etcd-pause-589189" in "kube-system" namespace has status "Ready":"True"
	I0103 20:37:27.431932  545961 pod_ready.go:81] duration metric: took 397.20932ms waiting for pod "etcd-pause-589189" in "kube-system" namespace to be "Ready" ...
	I0103 20:37:27.431950  545961 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-589189" in "kube-system" namespace to be "Ready" ...
	I0103 20:37:27.831111  545961 pod_ready.go:92] pod "kube-apiserver-pause-589189" in "kube-system" namespace has status "Ready":"True"
	I0103 20:37:27.831146  545961 pod_ready.go:81] duration metric: took 399.185973ms waiting for pod "kube-apiserver-pause-589189" in "kube-system" namespace to be "Ready" ...
	I0103 20:37:27.831160  545961 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-589189" in "kube-system" namespace to be "Ready" ...
	I0103 20:37:28.230301  545961 pod_ready.go:92] pod "kube-controller-manager-pause-589189" in "kube-system" namespace has status "Ready":"True"
	I0103 20:37:28.230325  545961 pod_ready.go:81] duration metric: took 399.146904ms waiting for pod "kube-controller-manager-pause-589189" in "kube-system" namespace to be "Ready" ...
	I0103 20:37:28.230343  545961 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-qptr2" in "kube-system" namespace to be "Ready" ...
	I0103 20:37:28.630376  545961 pod_ready.go:92] pod "kube-proxy-qptr2" in "kube-system" namespace has status "Ready":"True"
	I0103 20:37:28.630398  545961 pod_ready.go:81] duration metric: took 400.047144ms waiting for pod "kube-proxy-qptr2" in "kube-system" namespace to be "Ready" ...
	I0103 20:37:28.630410  545961 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-589189" in "kube-system" namespace to be "Ready" ...
	I0103 20:37:29.029577  545961 pod_ready.go:92] pod "kube-scheduler-pause-589189" in "kube-system" namespace has status "Ready":"True"
	I0103 20:37:29.029601  545961 pod_ready.go:81] duration metric: took 399.183648ms waiting for pod "kube-scheduler-pause-589189" in "kube-system" namespace to be "Ready" ...
	I0103 20:37:29.029610  545961 pod_ready.go:38] duration metric: took 2.529663008s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
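	Each pod_ready.go wait above polls one pod until its Ready condition reports True. A sketch of the same check using client-go (an assumption for illustration; minikube wraps its own client, and the pod name here is taken from the log):

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// podReady reports whether the pod's Ready condition is True.
	func podReady(p *corev1.Pod) bool {
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		deadline := time.Now().Add(4 * time.Minute)
		for time.Now().Before(deadline) {
			pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "etcd-pause-589189", metav1.GetOptions{})
			if err == nil && podReady(pod) {
				fmt.Println(`pod has status "Ready":"True"`)
				return
			}
			time.Sleep(2 * time.Second)
		}
		fmt.Println("timed out waiting for pod to be Ready")
	}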
	I0103 20:37:29.029625  545961 api_server.go:52] waiting for apiserver process to appear ...
	I0103 20:37:29.029688  545961 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 20:37:29.043780  545961 api_server.go:72] duration metric: took 2.744350093s to wait for apiserver process to appear ...
	I0103 20:37:29.043806  545961 api_server.go:88] waiting for apiserver healthz status ...
	I0103 20:37:29.043825  545961 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0103 20:37:29.052906  545961 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0103 20:37:29.054294  545961 api_server.go:141] control plane version: v1.28.4
	I0103 20:37:29.054318  545961 api_server.go:131] duration metric: took 10.505189ms to wait for apiserver health ...
	I0103 20:37:29.054328  545961 system_pods.go:43] waiting for kube-system pods to appear ...
	I0103 20:37:29.234659  545961 system_pods.go:59] 7 kube-system pods found
	I0103 20:37:29.234734  545961 system_pods.go:61] "coredns-5dd5756b68-q766p" [88099227-8f36-44d9-b01c-1d8a5fca054a] Running
	I0103 20:37:29.234753  545961 system_pods.go:61] "etcd-pause-589189" [17cf6adf-0fad-4a34-b8fe-b2560e398e68] Running
	I0103 20:37:29.234776  545961 system_pods.go:61] "kindnet-xh476" [16420a9b-d68e-4a16-84d7-e6344f3b9f27] Running
	I0103 20:37:29.234806  545961 system_pods.go:61] "kube-apiserver-pause-589189" [d3e43690-d065-4288-a1f7-795767f523a3] Running
	I0103 20:37:29.234830  545961 system_pods.go:61] "kube-controller-manager-pause-589189" [3f7f4884-9a81-4f79-b9e6-9241b8d09840] Running
	I0103 20:37:29.234849  545961 system_pods.go:61] "kube-proxy-qptr2" [a55774e9-f310-4e29-be2d-81f71022a59b] Running
	I0103 20:37:29.234868  545961 system_pods.go:61] "kube-scheduler-pause-589189" [0efb433d-48a7-44d8-a799-e4be9725e28b] Running
	I0103 20:37:29.234903  545961 system_pods.go:74] duration metric: took 180.56878ms to wait for pod list to return data ...
	I0103 20:37:29.234925  545961 default_sa.go:34] waiting for default service account to be created ...
	I0103 20:37:29.429334  545961 default_sa.go:45] found service account: "default"
	I0103 20:37:29.429428  545961 default_sa.go:55] duration metric: took 194.470442ms for default service account to be created ...
	I0103 20:37:29.429496  545961 system_pods.go:116] waiting for k8s-apps to be running ...
	I0103 20:37:29.644145  545961 system_pods.go:86] 7 kube-system pods found
	I0103 20:37:29.644232  545961 system_pods.go:89] "coredns-5dd5756b68-q766p" [88099227-8f36-44d9-b01c-1d8a5fca054a] Running
	I0103 20:37:29.644255  545961 system_pods.go:89] "etcd-pause-589189" [17cf6adf-0fad-4a34-b8fe-b2560e398e68] Running
	I0103 20:37:29.644281  545961 system_pods.go:89] "kindnet-xh476" [16420a9b-d68e-4a16-84d7-e6344f3b9f27] Running
	I0103 20:37:29.644345  545961 system_pods.go:89] "kube-apiserver-pause-589189" [d3e43690-d065-4288-a1f7-795767f523a3] Running
	I0103 20:37:29.644384  545961 system_pods.go:89] "kube-controller-manager-pause-589189" [3f7f4884-9a81-4f79-b9e6-9241b8d09840] Running
	I0103 20:37:29.644409  545961 system_pods.go:89] "kube-proxy-qptr2" [a55774e9-f310-4e29-be2d-81f71022a59b] Running
	I0103 20:37:29.644431  545961 system_pods.go:89] "kube-scheduler-pause-589189" [0efb433d-48a7-44d8-a799-e4be9725e28b] Running
	I0103 20:37:29.644464  545961 system_pods.go:126] duration metric: took 214.931192ms to wait for k8s-apps to be running ...
	I0103 20:37:29.644489  545961 system_svc.go:44] waiting for kubelet service to be running ....
	I0103 20:37:29.644595  545961 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0103 20:37:29.675819  545961 system_svc.go:56] duration metric: took 31.320051ms WaitForService to wait for kubelet.
	I0103 20:37:29.675843  545961 kubeadm.go:581] duration metric: took 3.376420632s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0103 20:37:29.675862  545961 node_conditions.go:102] verifying NodePressure condition ...
	I0103 20:37:29.831502  545961 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0103 20:37:29.831665  545961 node_conditions.go:123] node cpu capacity is 2
	I0103 20:37:29.831687  545961 node_conditions.go:105] duration metric: took 155.81849ms to run NodePressure ...
	I0103 20:37:29.831707  545961 start.go:228] waiting for startup goroutines ...
	I0103 20:37:29.831714  545961 start.go:233] waiting for cluster config update ...
	I0103 20:37:29.831725  545961 start.go:242] writing updated cluster config ...
	I0103 20:37:29.832091  545961 ssh_runner.go:195] Run: rm -f paused
	I0103 20:37:29.935271  545961 start.go:600] kubectl: 1.29.0, cluster: 1.28.4 (minor skew: 1)
	I0103 20:37:29.937646  545961 out.go:177] * Done! kubectl is now configured to use "pause-589189" cluster and "default" namespace by default

** /stderr **
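The second start succeeds only after the checks logged above: minikube polls the apiserver's /healthz over the forwarded 8443 port until it answers 200, then confirms the kube-system pods, the default service account, and the kubelet service. A minimal Go sketch of that health poll, assuming a self-signed apiserver certificate; the loop shape and names here are illustrative, not minikube's actual api_server.go code:

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // waitForHealthz polls an apiserver /healthz endpoint until it returns
    // 200 or the timeout elapses, mirroring the api_server.go check in the
    // log above. Illustrative sketch only.
    func waitForHealthz(url string, timeout time.Duration) error {
        // The kicbase apiserver uses a self-signed certificate, so this
        // local-only probe skips verification.
        client := &http.Client{
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
            Timeout:   2 * time.Second,
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    fmt.Printf("%s returned 200: %s\n", url, body)
                    return nil
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("apiserver not healthy after %v", timeout)
    }

    func main() {
        if err := waitForHealthz("https://192.168.76.2:8443/healthz", time.Minute); err != nil {
            fmt.Println(err)
        }
    }

Against the cluster above this returns as soon as https://192.168.76.2:8443/healthz answers ok, consistent with the ~10ms duration metric in the log.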
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect pause-589189
helpers_test.go:235: (dbg) docker inspect pause-589189:

-- stdout --
	[
	    {
	        "Id": "d8364cccb83a7e386a281d628dd5e921713af6e4882f7ae659b3284a29be602f",
	        "Created": "2024-01-03T20:35:27.244875508Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 540562,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-01-03T20:35:27.593709617Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:3bfff26a1ae256fcdf8f10a333efdefbe26edc5c1669e1cc5c973c016e44d3c4",
	        "ResolvConfPath": "/var/lib/docker/containers/d8364cccb83a7e386a281d628dd5e921713af6e4882f7ae659b3284a29be602f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d8364cccb83a7e386a281d628dd5e921713af6e4882f7ae659b3284a29be602f/hostname",
	        "HostsPath": "/var/lib/docker/containers/d8364cccb83a7e386a281d628dd5e921713af6e4882f7ae659b3284a29be602f/hosts",
	        "LogPath": "/var/lib/docker/containers/d8364cccb83a7e386a281d628dd5e921713af6e4882f7ae659b3284a29be602f/d8364cccb83a7e386a281d628dd5e921713af6e4882f7ae659b3284a29be602f-json.log",
	        "Name": "/pause-589189",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-589189:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-589189",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2147483648,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/a980cbfa53518a5d5bdf9f83e740341d898097028da0944279fcce8a6e0e69b9-init/diff:/var/lib/docker/overlay2/0cefd74c13c0ff527608d5d1778b7a3893c62167f91a1554bd1fa9cb8110135e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a980cbfa53518a5d5bdf9f83e740341d898097028da0944279fcce8a6e0e69b9/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a980cbfa53518a5d5bdf9f83e740341d898097028da0944279fcce8a6e0e69b9/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a980cbfa53518a5d5bdf9f83e740341d898097028da0944279fcce8a6e0e69b9/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-589189",
	                "Source": "/var/lib/docker/volumes/pause-589189/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-589189",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-589189",
	                "name.minikube.sigs.k8s.io": "pause-589189",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3f8cc0c4e993404d76afcec9e8af7aa7dcb02c3e0d34f9748fdb8ab61e011267",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33294"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33293"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33290"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33292"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33291"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/3f8cc0c4e993",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-589189": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "d8364cccb83a",
	                        "pause-589189"
	                    ],
	                    "NetworkID": "27ca36c6af97555a43f6834e15eb68ffbf6196ad012d41d49626e0d2a307976b",
	                    "EndpointID": "7cf6919bb821dcbf697982305521c9297658da84417488f888d3fe66b331e119",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
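In the inspect output, each exposed guest port (22, 2376, 5000, 8443, 32443) is published on an ephemeral loopback port, e.g. 8443/tcp on 127.0.0.1:33291; those are the addresses the kubeconfig and SSH commands target. A hedged sketch for pulling one of those bindings out programmatically, assuming only that the docker CLI is on PATH; the struct mirrors just the fields used here:

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    // inspectPort returns the 127.0.0.1 host port a kic container maps to a
    // given container port (e.g. "8443/tcp"). Minimal sketch: it shells out
    // to `docker inspect` and decodes only the fields it needs.
    func inspectPort(container, port string) (string, error) {
        out, err := exec.Command("docker", "inspect", container).Output()
        if err != nil {
            return "", err
        }
        var infos []struct {
            NetworkSettings struct {
                Ports map[string][]struct {
                    HostIp   string
                    HostPort string
                }
            }
        }
        if err := json.Unmarshal(out, &infos); err != nil {
            return "", err
        }
        if len(infos) == 0 || len(infos[0].NetworkSettings.Ports[port]) == 0 {
            return "", fmt.Errorf("no binding for %s", port)
        }
        return infos[0].NetworkSettings.Ports[port][0].HostPort, nil
    }

    func main() {
        p, err := inspectPort("pause-589189", "8443/tcp")
        fmt.Println(p, err) // "33291" for the container inspected above
    }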
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-589189 -n pause-589189
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p pause-589189 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p pause-589189 logs -n 25: (2.379972489s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |               Args                |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| start   | -p NoKubernetes-301144            | NoKubernetes-301144       | jenkins | v1.32.0 | 03 Jan 24 20:29 UTC |                     |
	|         | --no-kubernetes                   |                           |         |         |                     |                     |
	|         | --kubernetes-version=1.20         |                           |         |         |                     |                     |
	|         | --driver=docker                   |                           |         |         |                     |                     |
	|         | --container-runtime=crio          |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-301144            | NoKubernetes-301144       | jenkins | v1.32.0 | 03 Jan 24 20:29 UTC | 03 Jan 24 20:29 UTC |
	|         | --driver=docker                   |                           |         |         |                     |                     |
	|         | --container-runtime=crio          |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-301144            | NoKubernetes-301144       | jenkins | v1.32.0 | 03 Jan 24 20:29 UTC | 03 Jan 24 20:29 UTC |
	|         | --no-kubernetes                   |                           |         |         |                     |                     |
	|         | --driver=docker                   |                           |         |         |                     |                     |
	|         | --container-runtime=crio          |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-301144            | NoKubernetes-301144       | jenkins | v1.32.0 | 03 Jan 24 20:29 UTC | 03 Jan 24 20:29 UTC |
	| start   | -p NoKubernetes-301144            | NoKubernetes-301144       | jenkins | v1.32.0 | 03 Jan 24 20:29 UTC | 03 Jan 24 20:30 UTC |
	|         | --no-kubernetes                   |                           |         |         |                     |                     |
	|         | --driver=docker                   |                           |         |         |                     |                     |
	|         | --container-runtime=crio          |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-301144 sudo       | NoKubernetes-301144       | jenkins | v1.32.0 | 03 Jan 24 20:30 UTC |                     |
	|         | systemctl is-active --quiet       |                           |         |         |                     |                     |
	|         | service kubelet                   |                           |         |         |                     |                     |
	| stop    | -p NoKubernetes-301144            | NoKubernetes-301144       | jenkins | v1.32.0 | 03 Jan 24 20:30 UTC | 03 Jan 24 20:30 UTC |
	| start   | -p NoKubernetes-301144            | NoKubernetes-301144       | jenkins | v1.32.0 | 03 Jan 24 20:30 UTC | 03 Jan 24 20:30 UTC |
	|         | --driver=docker                   |                           |         |         |                     |                     |
	|         | --container-runtime=crio          |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-301144 sudo       | NoKubernetes-301144       | jenkins | v1.32.0 | 03 Jan 24 20:30 UTC |                     |
	|         | systemctl is-active --quiet       |                           |         |         |                     |                     |
	|         | service kubelet                   |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-301144            | NoKubernetes-301144       | jenkins | v1.32.0 | 03 Jan 24 20:30 UTC | 03 Jan 24 20:30 UTC |
	| start   | -p kubernetes-upgrade-753304      | kubernetes-upgrade-753304 | jenkins | v1.32.0 | 03 Jan 24 20:30 UTC | 03 Jan 24 20:31 UTC |
	|         | --memory=2200                     |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0      |                           |         |         |                     |                     |
	|         | --alsologtostderr                 |                           |         |         |                     |                     |
	|         | -v=1 --driver=docker              |                           |         |         |                     |                     |
	|         | --container-runtime=crio          |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-753304      | kubernetes-upgrade-753304 | jenkins | v1.32.0 | 03 Jan 24 20:31 UTC | 03 Jan 24 20:31 UTC |
	| start   | -p kubernetes-upgrade-753304      | kubernetes-upgrade-753304 | jenkins | v1.32.0 | 03 Jan 24 20:31 UTC | 03 Jan 24 20:36 UTC |
	|         | --memory=2200                     |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2 |                           |         |         |                     |                     |
	|         | --alsologtostderr                 |                           |         |         |                     |                     |
	|         | -v=1 --driver=docker              |                           |         |         |                     |                     |
	|         | --container-runtime=crio          |                           |         |         |                     |                     |
	| start   | -p missing-upgrade-108038         | missing-upgrade-108038    | jenkins | v1.32.0 | 03 Jan 24 20:31 UTC |                     |
	|         | --memory=2200                     |                           |         |         |                     |                     |
	|         | --alsologtostderr                 |                           |         |         |                     |                     |
	|         | -v=1 --driver=docker              |                           |         |         |                     |                     |
	|         | --container-runtime=crio          |                           |         |         |                     |                     |
	| delete  | -p missing-upgrade-108038         | missing-upgrade-108038    | jenkins | v1.32.0 | 03 Jan 24 20:32 UTC | 03 Jan 24 20:32 UTC |
	| start   | -p stopped-upgrade-077088         | stopped-upgrade-077088    | jenkins | v1.32.0 | 03 Jan 24 20:33 UTC |                     |
	|         | --memory=2200                     |                           |         |         |                     |                     |
	|         | --alsologtostderr                 |                           |         |         |                     |                     |
	|         | -v=1 --driver=docker              |                           |         |         |                     |                     |
	|         | --container-runtime=crio          |                           |         |         |                     |                     |
	| delete  | -p stopped-upgrade-077088         | stopped-upgrade-077088    | jenkins | v1.32.0 | 03 Jan 24 20:33 UTC | 03 Jan 24 20:33 UTC |
	| start   | -p running-upgrade-251987         | running-upgrade-251987    | jenkins | v1.32.0 | 03 Jan 24 20:35 UTC |                     |
	|         | --memory=2200                     |                           |         |         |                     |                     |
	|         | --alsologtostderr                 |                           |         |         |                     |                     |
	|         | -v=1 --driver=docker              |                           |         |         |                     |                     |
	|         | --container-runtime=crio          |                           |         |         |                     |                     |
	| delete  | -p running-upgrade-251987         | running-upgrade-251987    | jenkins | v1.32.0 | 03 Jan 24 20:35 UTC | 03 Jan 24 20:35 UTC |
	| start   | -p pause-589189 --memory=2048     | pause-589189              | jenkins | v1.32.0 | 03 Jan 24 20:35 UTC | 03 Jan 24 20:36 UTC |
	|         | --install-addons=false            |                           |         |         |                     |                     |
	|         | --wait=all --driver=docker        |                           |         |         |                     |                     |
	|         | --container-runtime=crio          |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-753304      | kubernetes-upgrade-753304 | jenkins | v1.32.0 | 03 Jan 24 20:36 UTC |                     |
	|         | --memory=2200                     |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0      |                           |         |         |                     |                     |
	|         | --driver=docker                   |                           |         |         |                     |                     |
	|         | --container-runtime=crio          |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-753304      | kubernetes-upgrade-753304 | jenkins | v1.32.0 | 03 Jan 24 20:36 UTC | 03 Jan 24 20:37 UTC |
	|         | --memory=2200                     |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2 |                           |         |         |                     |                     |
	|         | --alsologtostderr                 |                           |         |         |                     |                     |
	|         | -v=1 --driver=docker              |                           |         |         |                     |                     |
	|         | --container-runtime=crio          |                           |         |         |                     |                     |
	| start   | -p pause-589189                   | pause-589189              | jenkins | v1.32.0 | 03 Jan 24 20:36 UTC | 03 Jan 24 20:37 UTC |
	|         | --alsologtostderr                 |                           |         |         |                     |                     |
	|         | -v=1 --driver=docker              |                           |         |         |                     |                     |
	|         | --container-runtime=crio          |                           |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-753304      | kubernetes-upgrade-753304 | jenkins | v1.32.0 | 03 Jan 24 20:37 UTC | 03 Jan 24 20:37 UTC |
	| start   | -p force-systemd-flag-518436      | force-systemd-flag-518436 | jenkins | v1.32.0 | 03 Jan 24 20:37 UTC |                     |
	|         | --memory=2048 --force-systemd     |                           |         |         |                     |                     |
	|         | --alsologtostderr                 |                           |         |         |                     |                     |
	|         | -v=5 --driver=docker              |                           |         |         |                     |                     |
	|         | --container-runtime=crio          |                           |         |         |                     |                     |
	|---------|-----------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/03 20:37:20
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.21.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
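	Every entry that follows uses this klog layout, and two processes are interleaved below (545961, the pause-589189 restart; 548415, the force-systemd-flag-518436 start), so filtering on the threadid column is the practical way to read one story at a time. A small illustrative Go parser for the declared format; the group layout is an assumption drawn from the header line above, not shipped minikube code:

    package main

    import (
        "fmt"
        "regexp"
    )

    // klogLine matches the "[IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg"
    // format declared in the log header. Group positions are illustrative.
    var klogLine = regexp.MustCompile(
        `^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([\w.]+:\d+)\] (.*)$`)

    func main() {
        line := "I0103 20:37:20.002609  548415 out.go:296] Setting OutFile to fd 1 ..."
        if m := klogLine.FindStringSubmatch(line); m != nil {
            // m[1]=severity m[2]=mmdd m[3]=time m[4]=threadid m[5]=file:line m[6]=msg
            fmt.Printf("pid=%s file=%s msg=%q\n", m[4], m[5], m[6])
        }
    }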
	I0103 20:37:20.002609  548415 out.go:296] Setting OutFile to fd 1 ...
	I0103 20:37:20.002854  548415 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0103 20:37:20.002882  548415 out.go:309] Setting ErrFile to fd 2...
	I0103 20:37:20.002913  548415 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0103 20:37:20.003214  548415 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17885-409390/.minikube/bin
	I0103 20:37:20.003747  548415 out.go:303] Setting JSON to false
	I0103 20:37:20.004913  548415 start.go:128] hostinfo: {"hostname":"ip-172-31-29-130","uptime":8389,"bootTime":1704305851,"procs":206,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0103 20:37:20.005043  548415 start.go:138] virtualization:  
	I0103 20:37:20.009254  548415 out.go:177] * [force-systemd-flag-518436] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0103 20:37:20.014413  548415 notify.go:220] Checking for updates...
	I0103 20:37:20.014370  548415 out.go:177]   - MINIKUBE_LOCATION=17885
	I0103 20:37:20.018065  548415 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0103 20:37:20.020242  548415 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17885-409390/kubeconfig
	I0103 20:37:20.022466  548415 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17885-409390/.minikube
	I0103 20:37:20.024364  548415 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0103 20:37:20.026332  548415 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0103 20:37:20.028865  548415 config.go:182] Loaded profile config "pause-589189": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0103 20:37:20.029066  548415 driver.go:392] Setting default libvirt URI to qemu:///system
	I0103 20:37:20.057661  548415 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I0103 20:37:20.057784  548415 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0103 20:37:20.207998  548415 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:34 OomKillDisable:true NGoroutines:45 SystemTime:2024-01-03 20:37:20.196797244 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215044096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0103 20:37:20.208096  548415 docker.go:295] overlay module found
	I0103 20:37:20.210398  548415 out.go:177] * Using the docker driver based on user configuration
	I0103 20:37:20.212312  548415 start.go:298] selected driver: docker
	I0103 20:37:20.212346  548415 start.go:902] validating driver "docker" against <nil>
	I0103 20:37:20.212360  548415 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0103 20:37:20.213097  548415 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0103 20:37:20.294262  548415 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:34 OomKillDisable:true NGoroutines:45 SystemTime:2024-01-03 20:37:20.284881646 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215044096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0103 20:37:20.294427  548415 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0103 20:37:20.294711  548415 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I0103 20:37:20.296468  548415 out.go:177] * Using Docker driver with root privileges
	I0103 20:37:20.298323  548415 cni.go:84] Creating CNI manager for ""
	I0103 20:37:20.298346  548415 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0103 20:37:20.298357  548415 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I0103 20:37:20.298376  548415 start_flags.go:323] config:
	{Name:force-systemd-flag-518436 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:force-systemd-flag-518436 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0103 20:37:20.300631  548415 out.go:177] * Starting control plane node force-systemd-flag-518436 in cluster force-systemd-flag-518436
	I0103 20:37:20.302590  548415 cache.go:121] Beginning downloading kic base image for docker with crio
	I0103 20:37:20.304549  548415 out.go:177] * Pulling base image v0.0.42-1703498848-17857 ...
	I0103 20:37:20.306729  548415 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0103 20:37:20.306780  548415 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17885-409390/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4
	I0103 20:37:20.306801  548415 cache.go:56] Caching tarball of preloaded images
	I0103 20:37:20.306830  548415 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c in local docker daemon
	I0103 20:37:20.306885  548415 preload.go:174] Found /home/jenkins/minikube-integration/17885-409390/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0103 20:37:20.306895  548415 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0103 20:37:20.307004  548415 profile.go:148] Saving config to /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/force-systemd-flag-518436/config.json ...
	I0103 20:37:20.307025  548415 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/force-systemd-flag-518436/config.json: {Name:mkf962c40932403ce78465e3b38cd3cdb374293b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 20:37:20.325055  548415 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c in local docker daemon, skipping pull
	I0103 20:37:20.325099  548415 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c exists in daemon, skipping load
	I0103 20:37:20.325113  548415 cache.go:194] Successfully downloaded all kic artifacts
	I0103 20:37:20.325153  548415 start.go:365] acquiring machines lock for force-systemd-flag-518436: {Name:mk53306c96806ea93c3c8cab719a89671f7a5b5c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0103 20:37:20.325264  548415 start.go:369] acquired machines lock for "force-systemd-flag-518436" in 90.78µs
	I0103 20:37:20.325295  548415 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-518436 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:force-systemd-flag-518436 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0103 20:37:20.325465  548415 start.go:125] createHost starting for "" (driver="docker")
	I0103 20:37:18.925977  545961 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0103 20:37:18.939223  545961 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0103 20:37:18.939243  545961 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0103 20:37:18.986115  545961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0103 20:37:19.934069  545961 system_pods.go:43] waiting for kube-system pods to appear ...
	I0103 20:37:19.944219  545961 system_pods.go:59] 7 kube-system pods found
	I0103 20:37:19.944249  545961 system_pods.go:61] "coredns-5dd5756b68-q766p" [88099227-8f36-44d9-b01c-1d8a5fca054a] Running
	I0103 20:37:19.944266  545961 system_pods.go:61] "etcd-pause-589189" [17cf6adf-0fad-4a34-b8fe-b2560e398e68] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0103 20:37:19.944279  545961 system_pods.go:61] "kindnet-xh476" [16420a9b-d68e-4a16-84d7-e6344f3b9f27] Running
	I0103 20:37:19.944295  545961 system_pods.go:61] "kube-apiserver-pause-589189" [d3e43690-d065-4288-a1f7-795767f523a3] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0103 20:37:19.944304  545961 system_pods.go:61] "kube-controller-manager-pause-589189" [3f7f4884-9a81-4f79-b9e6-9241b8d09840] Running
	I0103 20:37:19.944310  545961 system_pods.go:61] "kube-proxy-qptr2" [a55774e9-f310-4e29-be2d-81f71022a59b] Running
	I0103 20:37:19.944317  545961 system_pods.go:61] "kube-scheduler-pause-589189" [0efb433d-48a7-44d8-a799-e4be9725e28b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0103 20:37:19.944326  545961 system_pods.go:74] duration metric: took 10.238279ms to wait for pod list to return data ...
	I0103 20:37:19.944345  545961 node_conditions.go:102] verifying NodePressure condition ...
	I0103 20:37:19.948540  545961 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0103 20:37:19.949047  545961 node_conditions.go:123] node cpu capacity is 2
	I0103 20:37:19.949062  545961 node_conditions.go:105] duration metric: took 4.707765ms to run NodePressure ...
	I0103 20:37:19.949089  545961 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0103 20:37:20.200831  545961 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0103 20:37:20.208962  545961 kubeadm.go:787] kubelet initialised
	I0103 20:37:20.208989  545961 kubeadm.go:788] duration metric: took 8.134367ms waiting for restarted kubelet to initialise ...
	I0103 20:37:20.208999  545961 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0103 20:37:20.216768  545961 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-q766p" in "kube-system" namespace to be "Ready" ...
	I0103 20:37:20.225478  545961 pod_ready.go:92] pod "coredns-5dd5756b68-q766p" in "kube-system" namespace has status "Ready":"True"
	I0103 20:37:20.225505  545961 pod_ready.go:81] duration metric: took 8.703655ms waiting for pod "coredns-5dd5756b68-q766p" in "kube-system" namespace to be "Ready" ...
	I0103 20:37:20.225521  545961 pod_ready.go:78] waiting up to 4m0s for pod "etcd-pause-589189" in "kube-system" namespace to be "Ready" ...
	I0103 20:37:22.233525  545961 pod_ready.go:102] pod "etcd-pause-589189" in "kube-system" namespace has status "Ready":"False"
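	The pod_ready waits above simply re-fetch each system-critical pod until its PodReady condition reports True; etcd-pause-589189 takes about six seconds here because the control plane was just restarted. A minimal client-go sketch of the same wait, assuming an illustrative kubeconfig path and fixed-interval polling rather than minikube's real pod_ready.go backoff:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitPodReady polls until the pod's PodReady condition is True, the way
    // the pod_ready.go waits above do. Sketch only: fixed interval, no backoff.
    func waitPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
            if err == nil {
                for _, c := range pod.Status.Conditions {
                    if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
                        return nil
                    }
                }
            }
            time.Sleep(2 * time.Second)
        }
        return fmt.Errorf("pod %s/%s not Ready after %v", ns, name, timeout)
    }

    func main() {
        // Path is illustrative; the run above uses the profile's kubeconfig.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        fmt.Println(waitPodReady(cs, "kube-system", "etcd-pause-589189", 4*time.Minute))
    }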
	I0103 20:37:20.329577  548415 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0103 20:37:20.329836  548415 start.go:159] libmachine.API.Create for "force-systemd-flag-518436" (driver="docker")
	I0103 20:37:20.329886  548415 client.go:168] LocalClient.Create starting
	I0103 20:37:20.329982  548415 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17885-409390/.minikube/certs/ca.pem
	I0103 20:37:20.330040  548415 main.go:141] libmachine: Decoding PEM data...
	I0103 20:37:20.330060  548415 main.go:141] libmachine: Parsing certificate...
	I0103 20:37:20.330115  548415 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17885-409390/.minikube/certs/cert.pem
	I0103 20:37:20.330137  548415 main.go:141] libmachine: Decoding PEM data...
	I0103 20:37:20.330153  548415 main.go:141] libmachine: Parsing certificate...
	I0103 20:37:20.330577  548415 cli_runner.go:164] Run: docker network inspect force-systemd-flag-518436 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0103 20:37:20.348060  548415 cli_runner.go:211] docker network inspect force-systemd-flag-518436 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0103 20:37:20.348146  548415 network_create.go:281] running [docker network inspect force-systemd-flag-518436] to gather additional debugging logs...
	I0103 20:37:20.348167  548415 cli_runner.go:164] Run: docker network inspect force-systemd-flag-518436
	W0103 20:37:20.370568  548415 cli_runner.go:211] docker network inspect force-systemd-flag-518436 returned with exit code 1
	I0103 20:37:20.370609  548415 network_create.go:284] error running [docker network inspect force-systemd-flag-518436]: docker network inspect force-systemd-flag-518436: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-flag-518436 not found
	I0103 20:37:20.370633  548415 network_create.go:286] output of [docker network inspect force-systemd-flag-518436]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-flag-518436 not found
	
	** /stderr **
	I0103 20:37:20.370737  548415 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0103 20:37:20.389286  548415 network.go:214] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-e48a1c7f0405 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:af:08:39:14} reservation:<nil>}
	I0103 20:37:20.389834  548415 network.go:214] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-5ad9a395bb96 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:d1:45:f6:7e} reservation:<nil>}
	I0103 20:37:20.390465  548415 network.go:209] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40025a65d0}
	I0103 20:37:20.390504  548415 network_create.go:124] attempt to create docker network force-systemd-flag-518436 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I0103 20:37:20.390609  548415 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-flag-518436 force-systemd-flag-518436
	I0103 20:37:20.472365  548415 network_create.go:108] docker network force-systemd-flag-518436 192.168.67.0/24 created
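	The subnet probe above walks candidate private /24 networks upward from 192.168.49.0/24, skipping any that an existing bridge already owns, until it settles on 192.168.67.0/24. A toy Go sketch of that selection, assuming (from the 49 → 58 → 67 sequence in the log) a step of 9 in the third octet, with `taken` standing in for the inspected docker networks:

    package main

    import "fmt"

    // freeSubnet steps the third octet by 9 (49, 58, 67, ...) until it
    // finds a /24 no existing network occupies, echoing the probe logged
    // above. Illustrative only, not minikube's network.go.
    func freeSubnet(taken map[string]bool) string {
        for octet := 49; octet <= 247; octet += 9 {
            cidr := fmt.Sprintf("192.168.%d.0/24", octet)
            if !taken[cidr] {
                return cidr
            }
        }
        return ""
    }

    func main() {
        taken := map[string]bool{"192.168.49.0/24": true, "192.168.58.0/24": true}
        fmt.Println(freeSubnet(taken)) // 192.168.67.0/24, matching the log
    }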
	I0103 20:37:20.472394  548415 kic.go:121] calculated static IP "192.168.67.2" for the "force-systemd-flag-518436" container
	I0103 20:37:20.472474  548415 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0103 20:37:20.492093  548415 cli_runner.go:164] Run: docker volume create force-systemd-flag-518436 --label name.minikube.sigs.k8s.io=force-systemd-flag-518436 --label created_by.minikube.sigs.k8s.io=true
	I0103 20:37:20.512591  548415 oci.go:103] Successfully created a docker volume force-systemd-flag-518436
	I0103 20:37:20.512680  548415 cli_runner.go:164] Run: docker run --rm --name force-systemd-flag-518436-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-518436 --entrypoint /usr/bin/test -v force-systemd-flag-518436:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c -d /var/lib
	I0103 20:37:21.156931  548415 oci.go:107] Successfully prepared a docker volume force-systemd-flag-518436
	I0103 20:37:21.156992  548415 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0103 20:37:21.157013  548415 kic.go:194] Starting extracting preloaded images to volume ...
	I0103 20:37:21.157103  548415 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17885-409390/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-518436:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c -I lz4 -xf /preloaded.tar -C /extractDir
	I0103 20:37:24.732431  545961 pod_ready.go:102] pod "etcd-pause-589189" in "kube-system" namespace has status "Ready":"False"
	I0103 20:37:26.232731  545961 pod_ready.go:92] pod "etcd-pause-589189" in "kube-system" namespace has status "Ready":"True"
	I0103 20:37:26.232751  545961 pod_ready.go:81] duration metric: took 6.007221011s waiting for pod "etcd-pause-589189" in "kube-system" namespace to be "Ready" ...
	I0103 20:37:26.232765  545961 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-pause-589189" in "kube-system" namespace to be "Ready" ...
	I0103 20:37:26.240943  545961 pod_ready.go:92] pod "kube-apiserver-pause-589189" in "kube-system" namespace has status "Ready":"True"
	I0103 20:37:26.240963  545961 pod_ready.go:81] duration metric: took 8.191096ms waiting for pod "kube-apiserver-pause-589189" in "kube-system" namespace to be "Ready" ...
	I0103 20:37:26.240974  545961 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-589189" in "kube-system" namespace to be "Ready" ...
	I0103 20:37:26.254218  545961 pod_ready.go:92] pod "kube-controller-manager-pause-589189" in "kube-system" namespace has status "Ready":"True"
	I0103 20:37:26.254284  545961 pod_ready.go:81] duration metric: took 13.301408ms waiting for pod "kube-controller-manager-pause-589189" in "kube-system" namespace to be "Ready" ...
	I0103 20:37:26.254311  545961 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-qptr2" in "kube-system" namespace to be "Ready" ...
	I0103 20:37:26.260999  545961 pod_ready.go:92] pod "kube-proxy-qptr2" in "kube-system" namespace has status "Ready":"True"
	I0103 20:37:26.261067  545961 pod_ready.go:81] duration metric: took 6.726059ms waiting for pod "kube-proxy-qptr2" in "kube-system" namespace to be "Ready" ...
	I0103 20:37:26.261091  545961 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-589189" in "kube-system" namespace to be "Ready" ...
	I0103 20:37:26.271995  545961 pod_ready.go:92] pod "kube-scheduler-pause-589189" in "kube-system" namespace has status "Ready":"True"
	I0103 20:37:26.272063  545961 pod_ready.go:81] duration metric: took 10.948596ms waiting for pod "kube-scheduler-pause-589189" in "kube-system" namespace to be "Ready" ...
	I0103 20:37:26.272088  545961 pod_ready.go:38] duration metric: took 6.063077977s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0103 20:37:26.272132  545961 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0103 20:37:26.285009  545961 ops.go:34] apiserver oom_adj: -16
	I0103 20:37:26.285080  545961 kubeadm.go:640] restartCluster took 33.225776576s
	I0103 20:37:26.285102  545961 kubeadm.go:406] StartCluster complete in 33.309561555s
	I0103 20:37:26.285130  545961 settings.go:142] acquiring lock: {Name:mk35e0b2d8071191a72193c66ba9549131012420 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 20:37:26.285219  545961 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17885-409390/kubeconfig
	I0103 20:37:26.287070  545961 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17885-409390/kubeconfig: {Name:mkcf9b222e1b36afc1c2e4e412234b0c105c9bc2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 20:37:26.287399  545961 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0103 20:37:26.287749  545961 config.go:182] Loaded profile config "pause-589189": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0103 20:37:26.288040  545961 addons.go:505] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0103 20:37:26.290171  545961 out.go:177] * Enabled addons: 
	I0103 20:37:26.289023  545961 kapi.go:59] client config for pause-589189: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17885-409390/.minikube/profiles/pause-589189/client.crt", KeyFile:"/home/jenkins/minikube-integration/17885-409390/.minikube/profiles/pause-589189/client.key", CAFile:"/home/jenkins/minikube-integration/17885-409390/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16bfa00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0103 20:37:26.294747  545961 addons.go:508] enable addons completed in 6.723835ms: enabled=[]
	I0103 20:37:26.299312  545961 kapi.go:248] "coredns" deployment in "kube-system" namespace and "pause-589189" context rescaled to 1 replicas
	I0103 20:37:26.299386  545961 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0103 20:37:26.302597  545961 out.go:177] * Verifying Kubernetes components...
	I0103 20:37:26.304313  545961 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0103 20:37:26.490971  545961 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0103 20:37:26.491025  545961 node_ready.go:35] waiting up to 6m0s for node "pause-589189" to be "Ready" ...
	I0103 20:37:26.499892  545961 node_ready.go:49] node "pause-589189" has status "Ready":"True"
	I0103 20:37:26.499920  545961 node_ready.go:38] duration metric: took 8.881269ms waiting for node "pause-589189" to be "Ready" ...
	I0103 20:37:26.499933  545961 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0103 20:37:26.632393  545961 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-q766p" in "kube-system" namespace to be "Ready" ...
	I0103 20:37:27.034670  545961 pod_ready.go:92] pod "coredns-5dd5756b68-q766p" in "kube-system" namespace has status "Ready":"True"
	I0103 20:37:27.034702  545961 pod_ready.go:81] duration metric: took 402.276796ms waiting for pod "coredns-5dd5756b68-q766p" in "kube-system" namespace to be "Ready" ...
	I0103 20:37:27.034714  545961 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-589189" in "kube-system" namespace to be "Ready" ...
	I0103 20:37:27.431905  545961 pod_ready.go:92] pod "etcd-pause-589189" in "kube-system" namespace has status "Ready":"True"
	I0103 20:37:27.431932  545961 pod_ready.go:81] duration metric: took 397.20932ms waiting for pod "etcd-pause-589189" in "kube-system" namespace to be "Ready" ...
	I0103 20:37:27.431950  545961 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-589189" in "kube-system" namespace to be "Ready" ...
	I0103 20:37:27.831111  545961 pod_ready.go:92] pod "kube-apiserver-pause-589189" in "kube-system" namespace has status "Ready":"True"
	I0103 20:37:27.831146  545961 pod_ready.go:81] duration metric: took 399.185973ms waiting for pod "kube-apiserver-pause-589189" in "kube-system" namespace to be "Ready" ...
	I0103 20:37:27.831160  545961 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-589189" in "kube-system" namespace to be "Ready" ...
	I0103 20:37:28.230301  545961 pod_ready.go:92] pod "kube-controller-manager-pause-589189" in "kube-system" namespace has status "Ready":"True"
	I0103 20:37:28.230325  545961 pod_ready.go:81] duration metric: took 399.146904ms waiting for pod "kube-controller-manager-pause-589189" in "kube-system" namespace to be "Ready" ...
	I0103 20:37:28.230343  545961 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-qptr2" in "kube-system" namespace to be "Ready" ...
	I0103 20:37:28.630376  545961 pod_ready.go:92] pod "kube-proxy-qptr2" in "kube-system" namespace has status "Ready":"True"
	I0103 20:37:28.630398  545961 pod_ready.go:81] duration metric: took 400.047144ms waiting for pod "kube-proxy-qptr2" in "kube-system" namespace to be "Ready" ...
	I0103 20:37:28.630410  545961 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-589189" in "kube-system" namespace to be "Ready" ...
	I0103 20:37:29.029577  545961 pod_ready.go:92] pod "kube-scheduler-pause-589189" in "kube-system" namespace has status "Ready":"True"
	I0103 20:37:29.029601  545961 pod_ready.go:81] duration metric: took 399.183648ms waiting for pod "kube-scheduler-pause-589189" in "kube-system" namespace to be "Ready" ...
	I0103 20:37:29.029610  545961 pod_ready.go:38] duration metric: took 2.529663008s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0103 20:37:29.029625  545961 api_server.go:52] waiting for apiserver process to appear ...
	I0103 20:37:29.029688  545961 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 20:37:29.043780  545961 api_server.go:72] duration metric: took 2.744350093s to wait for apiserver process to appear ...
	I0103 20:37:29.043806  545961 api_server.go:88] waiting for apiserver healthz status ...
	I0103 20:37:29.043825  545961 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0103 20:37:29.052906  545961 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0103 20:37:29.054294  545961 api_server.go:141] control plane version: v1.28.4
	I0103 20:37:29.054318  545961 api_server.go:131] duration metric: took 10.505189ms to wait for apiserver health ...
	I0103 20:37:29.054328  545961 system_pods.go:43] waiting for kube-system pods to appear ...
	I0103 20:37:29.234659  545961 system_pods.go:59] 7 kube-system pods found
	I0103 20:37:29.234734  545961 system_pods.go:61] "coredns-5dd5756b68-q766p" [88099227-8f36-44d9-b01c-1d8a5fca054a] Running
	I0103 20:37:29.234753  545961 system_pods.go:61] "etcd-pause-589189" [17cf6adf-0fad-4a34-b8fe-b2560e398e68] Running
	I0103 20:37:29.234776  545961 system_pods.go:61] "kindnet-xh476" [16420a9b-d68e-4a16-84d7-e6344f3b9f27] Running
	I0103 20:37:29.234806  545961 system_pods.go:61] "kube-apiserver-pause-589189" [d3e43690-d065-4288-a1f7-795767f523a3] Running
	I0103 20:37:29.234830  545961 system_pods.go:61] "kube-controller-manager-pause-589189" [3f7f4884-9a81-4f79-b9e6-9241b8d09840] Running
	I0103 20:37:29.234849  545961 system_pods.go:61] "kube-proxy-qptr2" [a55774e9-f310-4e29-be2d-81f71022a59b] Running
	I0103 20:37:29.234868  545961 system_pods.go:61] "kube-scheduler-pause-589189" [0efb433d-48a7-44d8-a799-e4be9725e28b] Running
	I0103 20:37:29.234903  545961 system_pods.go:74] duration metric: took 180.56878ms to wait for pod list to return data ...
	I0103 20:37:29.234925  545961 default_sa.go:34] waiting for default service account to be created ...
	I0103 20:37:29.429334  545961 default_sa.go:45] found service account: "default"
	I0103 20:37:29.429428  545961 default_sa.go:55] duration metric: took 194.470442ms for default service account to be created ...
	I0103 20:37:29.429496  545961 system_pods.go:116] waiting for k8s-apps to be running ...
	I0103 20:37:29.644145  545961 system_pods.go:86] 7 kube-system pods found
	I0103 20:37:29.644232  545961 system_pods.go:89] "coredns-5dd5756b68-q766p" [88099227-8f36-44d9-b01c-1d8a5fca054a] Running
	I0103 20:37:29.644255  545961 system_pods.go:89] "etcd-pause-589189" [17cf6adf-0fad-4a34-b8fe-b2560e398e68] Running
	I0103 20:37:29.644281  545961 system_pods.go:89] "kindnet-xh476" [16420a9b-d68e-4a16-84d7-e6344f3b9f27] Running
	I0103 20:37:29.644345  545961 system_pods.go:89] "kube-apiserver-pause-589189" [d3e43690-d065-4288-a1f7-795767f523a3] Running
	I0103 20:37:29.644384  545961 system_pods.go:89] "kube-controller-manager-pause-589189" [3f7f4884-9a81-4f79-b9e6-9241b8d09840] Running
	I0103 20:37:29.644409  545961 system_pods.go:89] "kube-proxy-qptr2" [a55774e9-f310-4e29-be2d-81f71022a59b] Running
	I0103 20:37:29.644431  545961 system_pods.go:89] "kube-scheduler-pause-589189" [0efb433d-48a7-44d8-a799-e4be9725e28b] Running
	I0103 20:37:29.644464  545961 system_pods.go:126] duration metric: took 214.931192ms to wait for k8s-apps to be running ...
	I0103 20:37:29.644489  545961 system_svc.go:44] waiting for kubelet service to be running ....
	I0103 20:37:29.644595  545961 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0103 20:37:29.675819  545961 system_svc.go:56] duration metric: took 31.320051ms WaitForService to wait for kubelet.
	I0103 20:37:29.675843  545961 kubeadm.go:581] duration metric: took 3.376420632s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0103 20:37:29.675862  545961 node_conditions.go:102] verifying NodePressure condition ...
	I0103 20:37:29.831502  545961 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0103 20:37:29.831665  545961 node_conditions.go:123] node cpu capacity is 2
	I0103 20:37:29.831687  545961 node_conditions.go:105] duration metric: took 155.81849ms to run NodePressure ...
	I0103 20:37:29.831707  545961 start.go:228] waiting for startup goroutines ...
	I0103 20:37:29.831714  545961 start.go:233] waiting for cluster config update ...
	I0103 20:37:29.831725  545961 start.go:242] writing updated cluster config ...
	I0103 20:37:29.832091  545961 ssh_runner.go:195] Run: rm -f paused
	I0103 20:37:29.935271  545961 start.go:600] kubectl: 1.29.0, cluster: 1.28.4 (minor skew: 1)
	I0103 20:37:29.937646  545961 out.go:177] * Done! kubectl is now configured to use "pause-589189" cluster and "default" namespace by default
	I0103 20:37:25.569955  548415 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17885-409390/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-518436:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c -I lz4 -xf /preloaded.tar -C /extractDir: (4.412814889s)
	I0103 20:37:25.569985  548415 kic.go:203] duration metric: took 4.412969 seconds to extract preloaded images to volume
	W0103 20:37:25.570125  548415 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0103 20:37:25.570252  548415 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0103 20:37:25.655912  548415 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-flag-518436 --name force-systemd-flag-518436 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-518436 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-flag-518436 --network force-systemd-flag-518436 --ip 192.168.67.2 --volume force-systemd-flag-518436:/var --security-opt apparmor=unconfined --memory=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c
	I0103 20:37:26.032261  548415 cli_runner.go:164] Run: docker container inspect force-systemd-flag-518436 --format={{.State.Running}}
	I0103 20:37:26.058968  548415 cli_runner.go:164] Run: docker container inspect force-systemd-flag-518436 --format={{.State.Status}}
	I0103 20:37:26.090918  548415 cli_runner.go:164] Run: docker exec force-systemd-flag-518436 stat /var/lib/dpkg/alternatives/iptables
	I0103 20:37:26.148665  548415 oci.go:144] the created container "force-systemd-flag-518436" has a running status.
	I0103 20:37:26.148701  548415 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17885-409390/.minikube/machines/force-systemd-flag-518436/id_rsa...
	I0103 20:37:26.663711  548415 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-409390/.minikube/machines/force-systemd-flag-518436/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0103 20:37:26.663761  548415 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17885-409390/.minikube/machines/force-systemd-flag-518436/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0103 20:37:26.692559  548415 cli_runner.go:164] Run: docker container inspect force-systemd-flag-518436 --format={{.State.Status}}
	I0103 20:37:26.722268  548415 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0103 20:37:26.722294  548415 kic_runner.go:114] Args: [docker exec --privileged force-systemd-flag-518436 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0103 20:37:26.820656  548415 cli_runner.go:164] Run: docker container inspect force-systemd-flag-518436 --format={{.State.Status}}
	I0103 20:37:26.850737  548415 machine.go:88] provisioning docker machine ...
	I0103 20:37:26.850770  548415 ubuntu.go:169] provisioning hostname "force-systemd-flag-518436"
	I0103 20:37:26.850833  548415 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-518436
	I0103 20:37:26.878654  548415 main.go:141] libmachine: Using SSH client type: native
	I0103 20:37:26.879103  548415 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3beb50] 0x3c12c0 <nil>  [] 0s} 127.0.0.1 33299 <nil> <nil>}
	I0103 20:37:26.879124  548415 main.go:141] libmachine: About to run SSH command:
	sudo hostname force-systemd-flag-518436 && echo "force-systemd-flag-518436" | sudo tee /etc/hostname
	I0103 20:37:27.110950  548415 main.go:141] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-518436
	
	I0103 20:37:27.111046  548415 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-518436
	I0103 20:37:27.140534  548415 main.go:141] libmachine: Using SSH client type: native
	I0103 20:37:27.140980  548415 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3beb50] 0x3c12c0 <nil>  [] 0s} 127.0.0.1 33299 <nil> <nil>}
	I0103 20:37:27.141015  548415 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sforce-systemd-flag-518436' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-flag-518436/g' /etc/hosts;
				else 
					echo '127.0.1.1 force-systemd-flag-518436' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0103 20:37:27.300774  548415 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0103 20:37:27.300844  548415 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17885-409390/.minikube CaCertPath:/home/jenkins/minikube-integration/17885-409390/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17885-409390/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17885-409390/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17885-409390/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17885-409390/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17885-409390/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17885-409390/.minikube}
	I0103 20:37:27.300876  548415 ubuntu.go:177] setting up certificates
	I0103 20:37:27.300897  548415 provision.go:83] configureAuth start
	I0103 20:37:27.300992  548415 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-518436
	I0103 20:37:27.326572  548415 provision.go:138] copyHostCerts
	I0103 20:37:27.326611  548415 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-409390/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17885-409390/.minikube/key.pem
	I0103 20:37:27.326641  548415 exec_runner.go:144] found /home/jenkins/minikube-integration/17885-409390/.minikube/key.pem, removing ...
	I0103 20:37:27.326647  548415 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17885-409390/.minikube/key.pem
	I0103 20:37:27.326721  548415 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17885-409390/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17885-409390/.minikube/key.pem (1679 bytes)
	I0103 20:37:27.326801  548415 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-409390/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17885-409390/.minikube/ca.pem
	I0103 20:37:27.326817  548415 exec_runner.go:144] found /home/jenkins/minikube-integration/17885-409390/.minikube/ca.pem, removing ...
	I0103 20:37:27.326821  548415 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17885-409390/.minikube/ca.pem
	I0103 20:37:27.326856  548415 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17885-409390/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17885-409390/.minikube/ca.pem (1078 bytes)
	I0103 20:37:27.326896  548415 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-409390/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17885-409390/.minikube/cert.pem
	I0103 20:37:27.326912  548415 exec_runner.go:144] found /home/jenkins/minikube-integration/17885-409390/.minikube/cert.pem, removing ...
	I0103 20:37:27.326916  548415 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17885-409390/.minikube/cert.pem
	I0103 20:37:27.326939  548415 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17885-409390/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17885-409390/.minikube/cert.pem (1123 bytes)
	I0103 20:37:27.326980  548415 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17885-409390/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17885-409390/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17885-409390/.minikube/certs/ca-key.pem org=jenkins.force-systemd-flag-518436 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube force-systemd-flag-518436]
	I0103 20:37:27.868454  548415 provision.go:172] copyRemoteCerts
	I0103 20:37:27.868525  548415 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0103 20:37:27.868578  548415 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-518436
	I0103 20:37:27.886912  548415 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33299 SSHKeyPath:/home/jenkins/minikube-integration/17885-409390/.minikube/machines/force-systemd-flag-518436/id_rsa Username:docker}
	I0103 20:37:27.990394  548415 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-409390/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0103 20:37:27.990456  548415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-409390/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0103 20:37:28.024646  548415 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-409390/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0103 20:37:28.024708  548415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-409390/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0103 20:37:28.055214  548415 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-409390/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0103 20:37:28.055280  548415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-409390/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0103 20:37:28.089612  548415 provision.go:86] duration metric: configureAuth took 788.66504ms
	I0103 20:37:28.089641  548415 ubuntu.go:193] setting minikube options for container-runtime
	I0103 20:37:28.089830  548415 config.go:182] Loaded profile config "force-systemd-flag-518436": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0103 20:37:28.089944  548415 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-518436
	I0103 20:37:28.113076  548415 main.go:141] libmachine: Using SSH client type: native
	I0103 20:37:28.114022  548415 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3beb50] 0x3c12c0 <nil>  [] 0s} 127.0.0.1 33299 <nil> <nil>}
	I0103 20:37:28.114059  548415 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0103 20:37:28.373062  548415 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0103 20:37:28.373090  548415 machine.go:91] provisioned docker machine in 1.522329945s
	I0103 20:37:28.373100  548415 client.go:171] LocalClient.Create took 8.043204665s
	I0103 20:37:28.373114  548415 start.go:167] duration metric: libmachine.API.Create for "force-systemd-flag-518436" took 8.043278871s
	I0103 20:37:28.373121  548415 start.go:300] post-start starting for "force-systemd-flag-518436" (driver="docker")
	I0103 20:37:28.373131  548415 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0103 20:37:28.373198  548415 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0103 20:37:28.373255  548415 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-518436
	I0103 20:37:28.392368  548415 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33299 SSHKeyPath:/home/jenkins/minikube-integration/17885-409390/.minikube/machines/force-systemd-flag-518436/id_rsa Username:docker}
	I0103 20:37:28.494119  548415 ssh_runner.go:195] Run: cat /etc/os-release
	I0103 20:37:28.498455  548415 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0103 20:37:28.498491  548415 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0103 20:37:28.498509  548415 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0103 20:37:28.498543  548415 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0103 20:37:28.498555  548415 filesync.go:126] Scanning /home/jenkins/minikube-integration/17885-409390/.minikube/addons for local assets ...
	I0103 20:37:28.498609  548415 filesync.go:126] Scanning /home/jenkins/minikube-integration/17885-409390/.minikube/files for local assets ...
	I0103 20:37:28.498707  548415 filesync.go:149] local asset: /home/jenkins/minikube-integration/17885-409390/.minikube/files/etc/ssl/certs/4147632.pem -> 4147632.pem in /etc/ssl/certs
	I0103 20:37:28.498723  548415 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-409390/.minikube/files/etc/ssl/certs/4147632.pem -> /etc/ssl/certs/4147632.pem
	I0103 20:37:28.498838  548415 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0103 20:37:28.510403  548415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-409390/.minikube/files/etc/ssl/certs/4147632.pem --> /etc/ssl/certs/4147632.pem (1708 bytes)
	I0103 20:37:28.542730  548415 start.go:303] post-start completed in 169.593982ms
	I0103 20:37:28.543114  548415 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-518436
	I0103 20:37:28.560707  548415 profile.go:148] Saving config to /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/force-systemd-flag-518436/config.json ...
	I0103 20:37:28.561000  548415 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0103 20:37:28.561056  548415 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-518436
	I0103 20:37:28.581425  548415 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33299 SSHKeyPath:/home/jenkins/minikube-integration/17885-409390/.minikube/machines/force-systemd-flag-518436/id_rsa Username:docker}
	I0103 20:37:28.676809  548415 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0103 20:37:28.682820  548415 start.go:128] duration metric: createHost completed in 8.357334537s
	I0103 20:37:28.682847  548415 start.go:83] releasing machines lock for "force-systemd-flag-518436", held for 8.3575692s
	I0103 20:37:28.682921  548415 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-518436
	I0103 20:37:28.700645  548415 ssh_runner.go:195] Run: cat /version.json
	I0103 20:37:28.700698  548415 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-518436
	I0103 20:37:28.700733  548415 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0103 20:37:28.700821  548415 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-518436
	I0103 20:37:28.719973  548415 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33299 SSHKeyPath:/home/jenkins/minikube-integration/17885-409390/.minikube/machines/force-systemd-flag-518436/id_rsa Username:docker}
	I0103 20:37:28.720224  548415 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33299 SSHKeyPath:/home/jenkins/minikube-integration/17885-409390/.minikube/machines/force-systemd-flag-518436/id_rsa Username:docker}
	I0103 20:37:28.950630  548415 ssh_runner.go:195] Run: systemctl --version
	I0103 20:37:28.956746  548415 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0103 20:37:29.106946  548415 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0103 20:37:29.113651  548415 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0103 20:37:29.144821  548415 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0103 20:37:29.144909  548415 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0103 20:37:29.197464  548415 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0103 20:37:29.197533  548415 start.go:475] detecting cgroup driver to use...
	I0103 20:37:29.197558  548415 start.go:479] using "systemd" cgroup driver as enforced via flags
	I0103 20:37:29.197658  548415 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0103 20:37:29.220050  548415 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0103 20:37:29.235516  548415 docker.go:203] disabling cri-docker service (if available) ...
	I0103 20:37:29.235588  548415 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0103 20:37:29.252403  548415 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0103 20:37:29.269479  548415 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0103 20:37:29.389727  548415 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0103 20:37:29.547320  548415 docker.go:219] disabling docker service ...
	I0103 20:37:29.547402  548415 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0103 20:37:29.574324  548415 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0103 20:37:29.590224  548415 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0103 20:37:29.722753  548415 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0103 20:37:29.830084  548415 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0103 20:37:29.851009  548415 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0103 20:37:29.875964  548415 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0103 20:37:29.876039  548415 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0103 20:37:29.889720  548415 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I0103 20:37:29.889798  548415 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0103 20:37:29.903266  548415 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0103 20:37:29.917487  548415 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0103 20:37:29.932813  548415 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0103 20:37:29.950124  548415 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0103 20:37:29.974440  548415 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0103 20:37:29.997596  548415 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0103 20:37:30.174440  548415 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0103 20:37:30.360802  548415 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0103 20:37:30.360888  548415 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0103 20:37:30.372146  548415 start.go:543] Will wait 60s for crictl version
	I0103 20:37:30.372228  548415 ssh_runner.go:195] Run: which crictl
	I0103 20:37:30.376905  548415 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0103 20:37:30.438682  548415 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0103 20:37:30.438773  548415 ssh_runner.go:195] Run: crio --version
	I0103 20:37:30.497115  548415 ssh_runner.go:195] Run: crio --version
	I0103 20:37:30.562730  548415 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.6 ...
	
	
	==> CRI-O <==
	Jan 03 20:37:17 pause-589189 crio[2426]: time="2024-01-03 20:37:17.777530582Z" level=info msg="Creating container: kube-system/coredns-5dd5756b68-q766p/coredns" id=54fc4dce-ef9b-48da-a8a3-93ce997d0bd0 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 03 20:37:17 pause-589189 crio[2426]: time="2024-01-03 20:37:17.778056326Z" level=warning msg="Allowed annotations are specified for workload []"
	Jan 03 20:37:17 pause-589189 crio[2426]: time="2024-01-03 20:37:17.850856118Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/985dcb3e125c03e9f511fa98dfd4d8f16b2bc3b8a7017e5919c225fa358ba130/merged/etc/passwd: no such file or directory"
	Jan 03 20:37:17 pause-589189 crio[2426]: time="2024-01-03 20:37:17.850913208Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/985dcb3e125c03e9f511fa98dfd4d8f16b2bc3b8a7017e5919c225fa358ba130/merged/etc/group: no such file or directory"
	Jan 03 20:37:17 pause-589189 crio[2426]: time="2024-01-03 20:37:17.994216456Z" level=info msg="Created container 30eedc6a46c84a94e2c3c67e847250b4c693a24f38b24dc4c8d2ffe89907eb9c: kube-system/kindnet-xh476/kindnet-cni" id=1542492e-ad2e-41bd-b312-3446edb1f31a name=/runtime.v1.RuntimeService/CreateContainer
	Jan 03 20:37:18 pause-589189 crio[2426]: time="2024-01-03 20:37:17.996298248Z" level=info msg="Starting container: 30eedc6a46c84a94e2c3c67e847250b4c693a24f38b24dc4c8d2ffe89907eb9c" id=5fe5b598-d2b8-46b6-aa09-91c57e5f8a8b name=/runtime.v1.RuntimeService/StartContainer
	Jan 03 20:37:18 pause-589189 crio[2426]: time="2024-01-03 20:37:18.017269297Z" level=info msg="Started container" PID=3130 containerID=30eedc6a46c84a94e2c3c67e847250b4c693a24f38b24dc4c8d2ffe89907eb9c description=kube-system/kindnet-xh476/kindnet-cni id=5fe5b598-d2b8-46b6-aa09-91c57e5f8a8b name=/runtime.v1.RuntimeService/StartContainer sandboxID=14ee1f0f3f9368277ea49cbedd43096b3138b778b533d2940e65d4dde11de2f1
	Jan 03 20:37:18 pause-589189 crio[2426]: time="2024-01-03 20:37:18.074100874Z" level=info msg="Created container d96acc06cecf50de1213dd8147d204b73f194bb3e86ac56b91e2b6ac29fe6827: kube-system/coredns-5dd5756b68-q766p/coredns" id=54fc4dce-ef9b-48da-a8a3-93ce997d0bd0 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 03 20:37:18 pause-589189 crio[2426]: time="2024-01-03 20:37:18.077178787Z" level=info msg="Starting container: d96acc06cecf50de1213dd8147d204b73f194bb3e86ac56b91e2b6ac29fe6827" id=15891f20-8630-42db-a50d-fcace8abf2d5 name=/runtime.v1.RuntimeService/StartContainer
	Jan 03 20:37:18 pause-589189 crio[2426]: time="2024-01-03 20:37:18.109227857Z" level=info msg="Started container" PID=3159 containerID=d96acc06cecf50de1213dd8147d204b73f194bb3e86ac56b91e2b6ac29fe6827 description=kube-system/coredns-5dd5756b68-q766p/coredns id=15891f20-8630-42db-a50d-fcace8abf2d5 name=/runtime.v1.RuntimeService/StartContainer sandboxID=959adb8df96711128ac2f95719313a3663564c9dde30a4e9e60a006ae7c618d5
	Jan 03 20:37:18 pause-589189 crio[2426]: time="2024-01-03 20:37:18.128864058Z" level=info msg="Created container ebd980d6d02c4d4ad41835e2d47eb8c29c0cfbfe84c3712dc680224e5bd93e89: kube-system/kube-proxy-qptr2/kube-proxy" id=188eba73-0716-4235-a32f-46b0b051febf name=/runtime.v1.RuntimeService/CreateContainer
	Jan 03 20:37:18 pause-589189 crio[2426]: time="2024-01-03 20:37:18.130809080Z" level=info msg="Starting container: ebd980d6d02c4d4ad41835e2d47eb8c29c0cfbfe84c3712dc680224e5bd93e89" id=a141357d-a1e3-4e15-b296-4a30bd3c1cc0 name=/runtime.v1.RuntimeService/StartContainer
	Jan 03 20:37:18 pause-589189 crio[2426]: time="2024-01-03 20:37:18.228076636Z" level=info msg="Started container" PID=3165 containerID=ebd980d6d02c4d4ad41835e2d47eb8c29c0cfbfe84c3712dc680224e5bd93e89 description=kube-system/kube-proxy-qptr2/kube-proxy id=a141357d-a1e3-4e15-b296-4a30bd3c1cc0 name=/runtime.v1.RuntimeService/StartContainer sandboxID=2502534472b45cfc8467d7909c5fc2b7ea2fda234d7ab31cbad29a69a74d9e1d
	Jan 03 20:37:18 pause-589189 crio[2426]: time="2024-01-03 20:37:18.494639604Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": CREATE"
	Jan 03 20:37:18 pause-589189 crio[2426]: time="2024-01-03 20:37:18.508512637Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Jan 03 20:37:18 pause-589189 crio[2426]: time="2024-01-03 20:37:18.508549675Z" level=info msg="Updated default CNI network name to kindnet"
	Jan 03 20:37:18 pause-589189 crio[2426]: time="2024-01-03 20:37:18.508567176Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": WRITE"
	Jan 03 20:37:18 pause-589189 crio[2426]: time="2024-01-03 20:37:18.527729595Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Jan 03 20:37:18 pause-589189 crio[2426]: time="2024-01-03 20:37:18.527767132Z" level=info msg="Updated default CNI network name to kindnet"
	Jan 03 20:37:18 pause-589189 crio[2426]: time="2024-01-03 20:37:18.527784765Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": RENAME"
	Jan 03 20:37:18 pause-589189 crio[2426]: time="2024-01-03 20:37:18.540670457Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Jan 03 20:37:18 pause-589189 crio[2426]: time="2024-01-03 20:37:18.540715896Z" level=info msg="Updated default CNI network name to kindnet"
	Jan 03 20:37:18 pause-589189 crio[2426]: time="2024-01-03 20:37:18.540749216Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist\": CREATE"
	Jan 03 20:37:18 pause-589189 crio[2426]: time="2024-01-03 20:37:18.548352612Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Jan 03 20:37:18 pause-589189 crio[2426]: time="2024-01-03 20:37:18.548395869Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	ebd980d6d02c4       3ca3ca488cf13fde14cfc4b3ffde0c53a8c161b030f4a444a797fba6aef38c39   13 seconds ago      Running             kube-proxy                2                   2502534472b45       kube-proxy-qptr2
	d96acc06cecf5       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108   13 seconds ago      Running             coredns                   2                   959adb8df9671       coredns-5dd5756b68-q766p
	30eedc6a46c84       04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26   14 seconds ago      Running             kindnet-cni               2                   14ee1f0f3f936       kindnet-xh476
	cbeda173effd8       9961cbceaf234d59b7dcf8a197a024f3e3ce4b7fe2b67c2378efd3d209ca994b   23 seconds ago      Running             kube-controller-manager   2                   70f8e9f178384       kube-controller-manager-pause-589189
	8aa0a9ca18033       05c284c929889d88306fdb3dd14ee2d0132543740f9e247685243214fc3d2c54   23 seconds ago      Running             kube-scheduler            2                   8591909c742a1       kube-scheduler-pause-589189
	35e5b55bdfab7       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace   23 seconds ago      Running             etcd                      2                   f2b5abd3abda2       etcd-pause-589189
	90f6983570911       04b4c447bb9d4840af3bf7e836397379d65df87c86e55dcd27f31a8d11df2419   23 seconds ago      Running             kube-apiserver            2                   b8e42777c245a       kube-apiserver-pause-589189
	0e822a87ca22e       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace   37 seconds ago      Exited              etcd                      1                   f2b5abd3abda2       etcd-pause-589189
	fe332dccccd3c       04b4c447bb9d4840af3bf7e836397379d65df87c86e55dcd27f31a8d11df2419   40 seconds ago      Exited              kube-apiserver            1                   b8e42777c245a       kube-apiserver-pause-589189
	d22ad169c567e       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108   40 seconds ago      Exited              coredns                   1                   959adb8df9671       coredns-5dd5756b68-q766p
	85eb04634a23f       3ca3ca488cf13fde14cfc4b3ffde0c53a8c161b030f4a444a797fba6aef38c39   40 seconds ago      Exited              kube-proxy                1                   2502534472b45       kube-proxy-qptr2
	78f150ce8a96b       04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26   40 seconds ago      Exited              kindnet-cni               1                   14ee1f0f3f936       kindnet-xh476
	48229eac57ee1       9961cbceaf234d59b7dcf8a197a024f3e3ce4b7fe2b67c2378efd3d209ca994b   40 seconds ago      Exited              kube-controller-manager   1                   70f8e9f178384       kube-controller-manager-pause-589189
	0b4de2b22c7b4       05c284c929889d88306fdb3dd14ee2d0132543740f9e247685243214fc3d2c54   40 seconds ago      Exited              kube-scheduler            1                   8591909c742a1       kube-scheduler-pause-589189
	
	
	==> coredns [d22ad169c567e2011ce0d9196523e4a33db859f3ebadfc4f50c24d4372357ae0] <==
	
	
	==> coredns [d96acc06cecf50de1213dd8147d204b73f194bb3e86ac56b91e2b6ac29fe6827] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:45345 - 9020 "HINFO IN 4866547591508166232.2971136184924569604. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.029445235s
	
	
	==> describe nodes <==
	Name:               pause-589189
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-589189
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1b6a81cbc05f28310ff11df4170e79e2b8bf477a
	                    minikube.k8s.io/name=pause-589189
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_03T20_35_56_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 03 Jan 2024 20:35:52 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-589189
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 03 Jan 2024 20:37:27 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 03 Jan 2024 20:37:17 +0000   Wed, 03 Jan 2024 20:35:47 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 03 Jan 2024 20:37:17 +0000   Wed, 03 Jan 2024 20:35:47 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 03 Jan 2024 20:37:17 +0000   Wed, 03 Jan 2024 20:35:47 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 03 Jan 2024 20:37:17 +0000   Wed, 03 Jan 2024 20:36:40 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    pause-589189
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022504Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022504Ki
	  pods:               110
	System Info:
	  Machine ID:                 209f005ad7c74580832a102266efa806
	  System UUID:                77b79695-f93f-4462-92b1-48d0a71f7d17
	  Boot ID:                    75f8dc93-969c-4083-a399-3fa01ac68612
	  Kernel Version:             5.15.0-1051-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-q766p                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     84s
	  kube-system                 etcd-pause-589189                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         98s
	  kube-system                 kindnet-xh476                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      84s
	  kube-system                 kube-apiserver-pause-589189             250m (12%)    0 (0%)      0 (0%)           0 (0%)         96s
	  kube-system                 kube-controller-manager-pause-589189    200m (10%)    0 (0%)      0 (0%)           0 (0%)         96s
	  kube-system                 kube-proxy-qptr2                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         84s
	  kube-system                 kube-scheduler-pause-589189             100m (5%)     0 (0%)      0 (0%)           0 (0%)         96s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 82s                  kube-proxy       
	  Normal  Starting                 13s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  106s (x8 over 106s)  kubelet          Node pause-589189 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    106s (x8 over 106s)  kubelet          Node pause-589189 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     106s (x8 over 106s)  kubelet          Node pause-589189 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientPID     97s                  kubelet          Node pause-589189 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  97s                  kubelet          Node pause-589189 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    97s                  kubelet          Node pause-589189 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 97s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           85s                  node-controller  Node pause-589189 event: Registered Node pause-589189 in Controller
	  Normal  NodeReady                52s                  kubelet          Node pause-589189 status is now: NodeReady
	  Normal  Starting                 25s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  25s (x8 over 25s)    kubelet          Node pause-589189 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    25s (x8 over 25s)    kubelet          Node pause-589189 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     25s (x8 over 25s)    kubelet          Node pause-589189 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           3s                   node-controller  Node pause-589189 event: Registered Node pause-589189 in Controller
	
	
	==> dmesg <==
	[  +0.001189] FS-Cache: O-key=[8] 'ccd1c90000000000'
	[  +0.000818] FS-Cache: N-cookie c=0000001e [p=00000015 fl=2 nc=0 na=1]
	[  +0.001059] FS-Cache: N-cookie d=000000008497ac2d{9p.inode} n=00000000a750ea4f
	[  +0.001301] FS-Cache: N-key=[8] 'ccd1c90000000000'
	[  +0.014646] FS-Cache: Duplicate cookie detected
	[  +0.000925] FS-Cache: O-cookie c=00000018 [p=00000015 fl=226 nc=0 na=1]
	[  +0.001115] FS-Cache: O-cookie d=000000008497ac2d{9p.inode} n=00000000f7d3da5e
	[  +0.001218] FS-Cache: O-key=[8] 'ccd1c90000000000'
	[  +0.000824] FS-Cache: N-cookie c=0000001f [p=00000015 fl=2 nc=0 na=1]
	[  +0.001156] FS-Cache: N-cookie d=000000008497ac2d{9p.inode} n=00000000bc524ce4
	[  +0.001241] FS-Cache: N-key=[8] 'ccd1c90000000000'
	[  +2.760106] FS-Cache: Duplicate cookie detected
	[  +0.000784] FS-Cache: O-cookie c=00000016 [p=00000015 fl=226 nc=0 na=1]
	[  +0.001116] FS-Cache: O-cookie d=000000008497ac2d{9p.inode} n=00000000ca9fc0f7
	[  +0.001225] FS-Cache: O-key=[8] 'cbd1c90000000000'
	[  +0.000783] FS-Cache: N-cookie c=00000021 [p=00000015 fl=2 nc=0 na=1]
	[  +0.001000] FS-Cache: N-cookie d=000000008497ac2d{9p.inode} n=000000003725d1cd
	[  +0.001192] FS-Cache: N-key=[8] 'cbd1c90000000000'
	[  +0.402621] FS-Cache: Duplicate cookie detected
	[  +0.000828] FS-Cache: O-cookie c=0000001b [p=00000015 fl=226 nc=0 na=1]
	[  +0.001155] FS-Cache: O-cookie d=000000008497ac2d{9p.inode} n=00000000458cff56
	[  +0.001202] FS-Cache: O-key=[8] 'd1d1c90000000000'
	[  +0.000836] FS-Cache: N-cookie c=00000022 [p=00000015 fl=2 nc=0 na=1]
	[  +0.001046] FS-Cache: N-cookie d=000000008497ac2d{9p.inode} n=00000000263e5b2a
	[  +0.001184] FS-Cache: N-key=[8] 'd1d1c90000000000'
	
	
	==> etcd [0e822a87ca22ecbd73e32a4bb31f706833c88c097de26c88b332a4865097c93f] <==
	{"level":"info","ts":"2024-01-03T20:36:54.861716Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2024-01-03T20:36:56.648071Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2024-01-03T20:36:56.648119Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-01-03T20:36:56.648154Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2024-01-03T20:36:56.648168Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2024-01-03T20:36:56.648175Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2024-01-03T20:36:56.648185Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2024-01-03T20:36:56.648192Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2024-01-03T20:36:56.653071Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-03T20:36:56.653232Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-03T20:36:56.654217Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2024-01-03T20:36:56.654235Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-01-03T20:36:56.653075Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:pause-589189 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2024-01-03T20:36:56.654595Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-01-03T20:36:56.654613Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-01-03T20:37:03.264134Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-01-03T20:37:03.26421Z","caller":"embed/etcd.go:376","msg":"closing etcd server","name":"pause-589189","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	{"level":"warn","ts":"2024-01-03T20:37:03.264299Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-01-03T20:37:03.264324Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-01-03T20:37:03.265023Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.76.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-01-03T20:37:03.265052Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.76.2:2379: use of closed network connection"}
	{"level":"info","ts":"2024-01-03T20:37:03.265175Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"ea7e25599daad906","current-leader-member-id":"ea7e25599daad906"}
	{"level":"info","ts":"2024-01-03T20:37:03.268595Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2024-01-03T20:37:03.26873Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2024-01-03T20:37:03.268741Z","caller":"embed/etcd.go:378","msg":"closed etcd server","name":"pause-589189","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	
	
	==> etcd [35e5b55bdfab72b4d4590efb9cc4298d7ec87ffe781d82e27efac9ad6779c9c9] <==
	{"level":"info","ts":"2024-01-03T20:37:08.919872Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2024-01-03T20:37:08.919959Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-03T20:37:08.919985Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-03T20:37:08.921833Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-01-03T20:37:08.921883Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-01-03T20:37:08.921892Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-01-03T20:37:08.984293Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-01-03T20:37:08.984533Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-01-03T20:37:08.984571Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-01-03T20:37:08.984671Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2024-01-03T20:37:08.98468Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2024-01-03T20:37:10.562891Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 3"}
	{"level":"info","ts":"2024-01-03T20:37:10.563024Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-01-03T20:37:10.563076Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2024-01-03T20:37:10.563131Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 4"}
	{"level":"info","ts":"2024-01-03T20:37:10.563172Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 4"}
	{"level":"info","ts":"2024-01-03T20:37:10.563213Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 4"}
	{"level":"info","ts":"2024-01-03T20:37:10.563248Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 4"}
	{"level":"info","ts":"2024-01-03T20:37:10.587519Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:pause-589189 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2024-01-03T20:37:10.587756Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-03T20:37:10.588793Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2024-01-03T20:37:10.588907Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-03T20:37:10.589753Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-01-03T20:37:10.589863Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-01-03T20:37:10.589917Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 20:37:32 up  2:20,  0 users,  load average: 4.61, 2.98, 2.30
	Linux pause-589189 5.15.0-1051-aws #56~20.04.1-Ubuntu SMP Tue Nov 28 15:43:06 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	
	==> kindnet [30eedc6a46c84a94e2c3c67e847250b4c693a24f38b24dc4c8d2ffe89907eb9c] <==
	I0103 20:37:18.160597       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0103 20:37:18.160918       1 main.go:107] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I0103 20:37:18.161236       1 main.go:116] setting mtu 1500 for CNI 
	I0103 20:37:18.161303       1 main.go:146] kindnetd IP family: "ipv4"
	I0103 20:37:18.161361       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0103 20:37:18.494391       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0103 20:37:18.494432       1 main.go:227] handling current node
	I0103 20:37:28.512853       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0103 20:37:28.512975       1 main.go:227] handling current node
	
	
	==> kindnet [78f150ce8a96bcfa7c3d36b4aab3838e2634c219621756b6d4ce3911e540b5a8] <==
	
	
	==> kube-apiserver [90f6983570911e6942b79abe8d42b3e6b272a1e14ccc3350a0f57594e7112913] <==
	I0103 20:37:16.605057       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0103 20:37:16.605065       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0103 20:37:16.605072       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0103 20:37:16.605080       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0103 20:37:16.634454       1 shared_informer.go:311] Waiting for caches to sync for crd-autoregister
	I0103 20:37:16.927717       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0103 20:37:16.931522       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0103 20:37:16.932261       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0103 20:37:16.932333       1 shared_informer.go:318] Caches are synced for configmaps
	I0103 20:37:16.934622       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0103 20:37:16.935497       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0103 20:37:16.935601       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0103 20:37:16.952485       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0103 20:37:16.961776       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0103 20:37:16.963892       1 aggregator.go:166] initial CRD sync complete...
	I0103 20:37:16.964008       1 autoregister_controller.go:141] Starting autoregister controller
	I0103 20:37:16.964045       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0103 20:37:16.964117       1 cache.go:39] Caches are synced for autoregister controller
	E0103 20:37:17.021983       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0103 20:37:17.408837       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0103 20:37:19.924072       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0103 20:37:20.076961       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0103 20:37:20.097865       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0103 20:37:20.179540       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0103 20:37:20.188593       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	
	==> kube-apiserver [fe332dccccd3cfdf9eb22fda618573d4ce50f4b9a7d78741ca394f0248e4a1bb] <==
	
	
	==> kube-controller-manager [48229eac57ee1c0f3d9f6bc28113a1e66b5ba7e3490f1f786830c924316737c5] <==
	
	
	==> kube-controller-manager [cbeda173effd84e4da29608075847c35a44e01ec35994c85262d3c83288eb00b] <==
	I0103 20:37:29.583526       1 shared_informer.go:318] Caches are synced for service account
	I0103 20:37:29.583609       1 taint_manager.go:205] "Starting NoExecuteTaintManager"
	I0103 20:37:29.583665       1 taint_manager.go:210] "Sending events to api server"
	I0103 20:37:29.584380       1 event.go:307] "Event occurred" object="pause-589189" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node pause-589189 event: Registered Node pause-589189 in Controller"
	I0103 20:37:29.588547       1 shared_informer.go:318] Caches are synced for persistent volume
	I0103 20:37:29.592150       1 shared_informer.go:318] Caches are synced for PV protection
	I0103 20:37:29.593351       1 shared_informer.go:318] Caches are synced for daemon sets
	I0103 20:37:29.594649       1 shared_informer.go:318] Caches are synced for endpoint
	I0103 20:37:29.596280       1 shared_informer.go:318] Caches are synced for GC
	I0103 20:37:29.598950       1 shared_informer.go:318] Caches are synced for deployment
	I0103 20:37:29.602274       1 shared_informer.go:318] Caches are synced for ReplicaSet
	I0103 20:37:29.602490       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="77.66µs"
	I0103 20:37:29.608352       1 shared_informer.go:318] Caches are synced for expand
	I0103 20:37:29.608423       1 shared_informer.go:318] Caches are synced for ReplicationController
	I0103 20:37:29.618107       1 shared_informer.go:318] Caches are synced for PVC protection
	I0103 20:37:29.629739       1 shared_informer.go:318] Caches are synced for HPA
	I0103 20:37:29.632699       1 shared_informer.go:318] Caches are synced for stateful set
	I0103 20:37:29.673230       1 shared_informer.go:318] Caches are synced for TTL after finished
	I0103 20:37:29.676410       1 shared_informer.go:318] Caches are synced for resource quota
	I0103 20:37:29.676528       1 shared_informer.go:318] Caches are synced for job
	I0103 20:37:29.743878       1 shared_informer.go:318] Caches are synced for resource quota
	I0103 20:37:29.763945       1 shared_informer.go:318] Caches are synced for cronjob
	I0103 20:37:30.090558       1 shared_informer.go:318] Caches are synced for garbage collector
	I0103 20:37:30.090682       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0103 20:37:30.154392       1 shared_informer.go:318] Caches are synced for garbage collector
	
	
	==> kube-proxy [85eb04634a23fe2f0cabe61dd973cbc0b3efaf531040055e52ee92f023d47bfe] <==
	
	
	==> kube-proxy [ebd980d6d02c4d4ad41835e2d47eb8c29c0cfbfe84c3712dc680224e5bd93e89] <==
	I0103 20:37:18.400422       1 server_others.go:69] "Using iptables proxy"
	I0103 20:37:18.424742       1 node.go:141] Successfully retrieved node IP: 192.168.76.2
	I0103 20:37:18.548458       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0103 20:37:18.551484       1 server_others.go:152] "Using iptables Proxier"
	I0103 20:37:18.551625       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0103 20:37:18.551663       1 server_others.go:438] "Defaulting to no-op detect-local"
	I0103 20:37:18.551770       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0103 20:37:18.552056       1 server.go:846] "Version info" version="v1.28.4"
	I0103 20:37:18.552325       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0103 20:37:18.553266       1 config.go:188] "Starting service config controller"
	I0103 20:37:18.553556       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0103 20:37:18.553629       1 config.go:97] "Starting endpoint slice config controller"
	I0103 20:37:18.553669       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0103 20:37:18.554403       1 config.go:315] "Starting node config controller"
	I0103 20:37:18.554474       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0103 20:37:18.655671       1 shared_informer.go:318] Caches are synced for node config
	I0103 20:37:18.655768       1 shared_informer.go:318] Caches are synced for service config
	I0103 20:37:18.655795       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [0b4de2b22c7b4724729f6e7fb8c0e390ff80431329c90ae0cc8d0a069bcdfb8d] <==
	
	
	==> kube-scheduler [8aa0a9ca18033bc8d38f5704405cf21ef9273e00c0d4de80c1e533396758ab3f] <==
	I0103 20:37:13.918951       1 serving.go:348] Generated self-signed cert in-memory
	W0103 20:37:16.780405       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0103 20:37:16.780522       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0103 20:37:16.780558       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0103 20:37:16.780616       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0103 20:37:16.901396       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I0103 20:37:16.901515       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0103 20:37:16.908686       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0103 20:37:16.908813       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0103 20:37:16.911538       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0103 20:37:16.911674       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0103 20:37:17.012955       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jan 03 20:37:08 pause-589189 kubelet[2903]: E0103 20:37:08.599027    2903 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	Jan 03 20:37:08 pause-589189 kubelet[2903]: W0103 20:37:08.690506    2903 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	Jan 03 20:37:08 pause-589189 kubelet[2903]: E0103 20:37:08.690833    2903 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	Jan 03 20:37:08 pause-589189 kubelet[2903]: W0103 20:37:08.823229    2903 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	Jan 03 20:37:08 pause-589189 kubelet[2903]: E0103 20:37:08.823290    2903 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	Jan 03 20:37:08 pause-589189 kubelet[2903]: W0103 20:37:08.831957    2903 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)pause-589189&limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	Jan 03 20:37:08 pause-589189 kubelet[2903]: E0103 20:37:08.832022    2903 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)pause-589189&limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	Jan 03 20:37:09 pause-589189 kubelet[2903]: I0103 20:37:09.041337    2903 kubelet_node_status.go:70] "Attempting to register node" node="pause-589189"
	Jan 03 20:37:17 pause-589189 kubelet[2903]: I0103 20:37:17.000178    2903 kubelet_node_status.go:108] "Node was previously registered" node="pause-589189"
	Jan 03 20:37:17 pause-589189 kubelet[2903]: I0103 20:37:17.000282    2903 kubelet_node_status.go:73] "Successfully registered node" node="pause-589189"
	Jan 03 20:37:17 pause-589189 kubelet[2903]: I0103 20:37:17.002707    2903 kuberuntime_manager.go:1528] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Jan 03 20:37:17 pause-589189 kubelet[2903]: I0103 20:37:17.003590    2903 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Jan 03 20:37:17 pause-589189 kubelet[2903]: I0103 20:37:17.460753    2903 apiserver.go:52] "Watching apiserver"
	Jan 03 20:37:17 pause-589189 kubelet[2903]: I0103 20:37:17.469084    2903 topology_manager.go:215] "Topology Admit Handler" podUID="16420a9b-d68e-4a16-84d7-e6344f3b9f27" podNamespace="kube-system" podName="kindnet-xh476"
	Jan 03 20:37:17 pause-589189 kubelet[2903]: I0103 20:37:17.469223    2903 topology_manager.go:215] "Topology Admit Handler" podUID="a55774e9-f310-4e29-be2d-81f71022a59b" podNamespace="kube-system" podName="kube-proxy-qptr2"
	Jan 03 20:37:17 pause-589189 kubelet[2903]: I0103 20:37:17.469274    2903 topology_manager.go:215] "Topology Admit Handler" podUID="88099227-8f36-44d9-b01c-1d8a5fca054a" podNamespace="kube-system" podName="coredns-5dd5756b68-q766p"
	Jan 03 20:37:17 pause-589189 kubelet[2903]: I0103 20:37:17.513351    2903 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
	Jan 03 20:37:17 pause-589189 kubelet[2903]: I0103 20:37:17.610004    2903 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/16420a9b-d68e-4a16-84d7-e6344f3b9f27-lib-modules\") pod \"kindnet-xh476\" (UID: \"16420a9b-d68e-4a16-84d7-e6344f3b9f27\") " pod="kube-system/kindnet-xh476"
	Jan 03 20:37:17 pause-589189 kubelet[2903]: I0103 20:37:17.610078    2903 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a55774e9-f310-4e29-be2d-81f71022a59b-xtables-lock\") pod \"kube-proxy-qptr2\" (UID: \"a55774e9-f310-4e29-be2d-81f71022a59b\") " pod="kube-system/kube-proxy-qptr2"
	Jan 03 20:37:17 pause-589189 kubelet[2903]: I0103 20:37:17.610115    2903 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/16420a9b-d68e-4a16-84d7-e6344f3b9f27-cni-cfg\") pod \"kindnet-xh476\" (UID: \"16420a9b-d68e-4a16-84d7-e6344f3b9f27\") " pod="kube-system/kindnet-xh476"
	Jan 03 20:37:17 pause-589189 kubelet[2903]: I0103 20:37:17.610150    2903 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/16420a9b-d68e-4a16-84d7-e6344f3b9f27-xtables-lock\") pod \"kindnet-xh476\" (UID: \"16420a9b-d68e-4a16-84d7-e6344f3b9f27\") " pod="kube-system/kindnet-xh476"
	Jan 03 20:37:17 pause-589189 kubelet[2903]: I0103 20:37:17.610178    2903 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a55774e9-f310-4e29-be2d-81f71022a59b-lib-modules\") pod \"kube-proxy-qptr2\" (UID: \"a55774e9-f310-4e29-be2d-81f71022a59b\") " pod="kube-system/kube-proxy-qptr2"
	Jan 03 20:37:17 pause-589189 kubelet[2903]: I0103 20:37:17.770653    2903 scope.go:117] "RemoveContainer" containerID="d22ad169c567e2011ce0d9196523e4a33db859f3ebadfc4f50c24d4372357ae0"
	Jan 03 20:37:17 pause-589189 kubelet[2903]: I0103 20:37:17.771776    2903 scope.go:117] "RemoveContainer" containerID="78f150ce8a96bcfa7c3d36b4aab3838e2634c219621756b6d4ce3911e540b5a8"
	Jan 03 20:37:17 pause-589189 kubelet[2903]: I0103 20:37:17.772173    2903 scope.go:117] "RemoveContainer" containerID="85eb04634a23fe2f0cabe61dd973cbc0b3efaf531040055e52ee92f023d47bfe"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-589189 -n pause-589189
helpers_test.go:261: (dbg) Run:  kubectl --context pause-589189 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect pause-589189
helpers_test.go:235: (dbg) docker inspect pause-589189:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "d8364cccb83a7e386a281d628dd5e921713af6e4882f7ae659b3284a29be602f",
	        "Created": "2024-01-03T20:35:27.244875508Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 540562,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-01-03T20:35:27.593709617Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:3bfff26a1ae256fcdf8f10a333efdefbe26edc5c1669e1cc5c973c016e44d3c4",
	        "ResolvConfPath": "/var/lib/docker/containers/d8364cccb83a7e386a281d628dd5e921713af6e4882f7ae659b3284a29be602f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d8364cccb83a7e386a281d628dd5e921713af6e4882f7ae659b3284a29be602f/hostname",
	        "HostsPath": "/var/lib/docker/containers/d8364cccb83a7e386a281d628dd5e921713af6e4882f7ae659b3284a29be602f/hosts",
	        "LogPath": "/var/lib/docker/containers/d8364cccb83a7e386a281d628dd5e921713af6e4882f7ae659b3284a29be602f/d8364cccb83a7e386a281d628dd5e921713af6e4882f7ae659b3284a29be602f-json.log",
	        "Name": "/pause-589189",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-589189:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-589189",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2147483648,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/a980cbfa53518a5d5bdf9f83e740341d898097028da0944279fcce8a6e0e69b9-init/diff:/var/lib/docker/overlay2/0cefd74c13c0ff527608d5d1778b7a3893c62167f91a1554bd1fa9cb8110135e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a980cbfa53518a5d5bdf9f83e740341d898097028da0944279fcce8a6e0e69b9/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a980cbfa53518a5d5bdf9f83e740341d898097028da0944279fcce8a6e0e69b9/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a980cbfa53518a5d5bdf9f83e740341d898097028da0944279fcce8a6e0e69b9/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-589189",
	                "Source": "/var/lib/docker/volumes/pause-589189/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-589189",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-589189",
	                "name.minikube.sigs.k8s.io": "pause-589189",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3f8cc0c4e993404d76afcec9e8af7aa7dcb02c3e0d34f9748fdb8ab61e011267",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33294"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33293"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33290"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33292"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33291"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/3f8cc0c4e993",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-589189": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "d8364cccb83a",
	                        "pause-589189"
	                    ],
	                    "NetworkID": "27ca36c6af97555a43f6834e15eb68ffbf6196ad012d41d49626e0d2a307976b",
	                    "EndpointID": "7cf6919bb821dcbf697982305521c9297658da84417488f888d3fe66b331e119",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-589189 -n pause-589189
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p pause-589189 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p pause-589189 logs -n 25: (2.251010932s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |               Args                |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| start   | -p NoKubernetes-301144            | NoKubernetes-301144       | jenkins | v1.32.0 | 03 Jan 24 20:29 UTC |                     |
	|         | --no-kubernetes                   |                           |         |         |                     |                     |
	|         | --kubernetes-version=1.20         |                           |         |         |                     |                     |
	|         | --driver=docker                   |                           |         |         |                     |                     |
	|         | --container-runtime=crio          |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-301144            | NoKubernetes-301144       | jenkins | v1.32.0 | 03 Jan 24 20:29 UTC | 03 Jan 24 20:29 UTC |
	|         | --driver=docker                   |                           |         |         |                     |                     |
	|         | --container-runtime=crio          |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-301144            | NoKubernetes-301144       | jenkins | v1.32.0 | 03 Jan 24 20:29 UTC | 03 Jan 24 20:29 UTC |
	|         | --no-kubernetes                   |                           |         |         |                     |                     |
	|         | --driver=docker                   |                           |         |         |                     |                     |
	|         | --container-runtime=crio          |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-301144            | NoKubernetes-301144       | jenkins | v1.32.0 | 03 Jan 24 20:29 UTC | 03 Jan 24 20:29 UTC |
	| start   | -p NoKubernetes-301144            | NoKubernetes-301144       | jenkins | v1.32.0 | 03 Jan 24 20:29 UTC | 03 Jan 24 20:30 UTC |
	|         | --no-kubernetes                   |                           |         |         |                     |                     |
	|         | --driver=docker                   |                           |         |         |                     |                     |
	|         | --container-runtime=crio          |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-301144 sudo       | NoKubernetes-301144       | jenkins | v1.32.0 | 03 Jan 24 20:30 UTC |                     |
	|         | systemctl is-active --quiet       |                           |         |         |                     |                     |
	|         | service kubelet                   |                           |         |         |                     |                     |
	| stop    | -p NoKubernetes-301144            | NoKubernetes-301144       | jenkins | v1.32.0 | 03 Jan 24 20:30 UTC | 03 Jan 24 20:30 UTC |
	| start   | -p NoKubernetes-301144            | NoKubernetes-301144       | jenkins | v1.32.0 | 03 Jan 24 20:30 UTC | 03 Jan 24 20:30 UTC |
	|         | --driver=docker                   |                           |         |         |                     |                     |
	|         | --container-runtime=crio          |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-301144 sudo       | NoKubernetes-301144       | jenkins | v1.32.0 | 03 Jan 24 20:30 UTC |                     |
	|         | systemctl is-active --quiet       |                           |         |         |                     |                     |
	|         | service kubelet                   |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-301144            | NoKubernetes-301144       | jenkins | v1.32.0 | 03 Jan 24 20:30 UTC | 03 Jan 24 20:30 UTC |
	| start   | -p kubernetes-upgrade-753304      | kubernetes-upgrade-753304 | jenkins | v1.32.0 | 03 Jan 24 20:30 UTC | 03 Jan 24 20:31 UTC |
	|         | --memory=2200                     |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0      |                           |         |         |                     |                     |
	|         | --alsologtostderr                 |                           |         |         |                     |                     |
	|         | -v=1 --driver=docker              |                           |         |         |                     |                     |
	|         | --container-runtime=crio          |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-753304      | kubernetes-upgrade-753304 | jenkins | v1.32.0 | 03 Jan 24 20:31 UTC | 03 Jan 24 20:31 UTC |
	| start   | -p kubernetes-upgrade-753304      | kubernetes-upgrade-753304 | jenkins | v1.32.0 | 03 Jan 24 20:31 UTC | 03 Jan 24 20:36 UTC |
	|         | --memory=2200                     |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2 |                           |         |         |                     |                     |
	|         | --alsologtostderr                 |                           |         |         |                     |                     |
	|         | -v=1 --driver=docker              |                           |         |         |                     |                     |
	|         | --container-runtime=crio          |                           |         |         |                     |                     |
	| start   | -p missing-upgrade-108038         | missing-upgrade-108038    | jenkins | v1.32.0 | 03 Jan 24 20:31 UTC |                     |
	|         | --memory=2200                     |                           |         |         |                     |                     |
	|         | --alsologtostderr                 |                           |         |         |                     |                     |
	|         | -v=1 --driver=docker              |                           |         |         |                     |                     |
	|         | --container-runtime=crio          |                           |         |         |                     |                     |
	| delete  | -p missing-upgrade-108038         | missing-upgrade-108038    | jenkins | v1.32.0 | 03 Jan 24 20:32 UTC | 03 Jan 24 20:32 UTC |
	| start   | -p stopped-upgrade-077088         | stopped-upgrade-077088    | jenkins | v1.32.0 | 03 Jan 24 20:33 UTC |                     |
	|         | --memory=2200                     |                           |         |         |                     |                     |
	|         | --alsologtostderr                 |                           |         |         |                     |                     |
	|         | -v=1 --driver=docker              |                           |         |         |                     |                     |
	|         | --container-runtime=crio          |                           |         |         |                     |                     |
	| delete  | -p stopped-upgrade-077088         | stopped-upgrade-077088    | jenkins | v1.32.0 | 03 Jan 24 20:33 UTC | 03 Jan 24 20:33 UTC |
	| start   | -p running-upgrade-251987         | running-upgrade-251987    | jenkins | v1.32.0 | 03 Jan 24 20:35 UTC |                     |
	|         | --memory=2200                     |                           |         |         |                     |                     |
	|         | --alsologtostderr                 |                           |         |         |                     |                     |
	|         | -v=1 --driver=docker              |                           |         |         |                     |                     |
	|         | --container-runtime=crio          |                           |         |         |                     |                     |
	| delete  | -p running-upgrade-251987         | running-upgrade-251987    | jenkins | v1.32.0 | 03 Jan 24 20:35 UTC | 03 Jan 24 20:35 UTC |
	| start   | -p pause-589189 --memory=2048     | pause-589189              | jenkins | v1.32.0 | 03 Jan 24 20:35 UTC | 03 Jan 24 20:36 UTC |
	|         | --install-addons=false            |                           |         |         |                     |                     |
	|         | --wait=all --driver=docker        |                           |         |         |                     |                     |
	|         | --container-runtime=crio          |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-753304      | kubernetes-upgrade-753304 | jenkins | v1.32.0 | 03 Jan 24 20:36 UTC |                     |
	|         | --memory=2200                     |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0      |                           |         |         |                     |                     |
	|         | --driver=docker                   |                           |         |         |                     |                     |
	|         | --container-runtime=crio          |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-753304      | kubernetes-upgrade-753304 | jenkins | v1.32.0 | 03 Jan 24 20:36 UTC | 03 Jan 24 20:37 UTC |
	|         | --memory=2200                     |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2 |                           |         |         |                     |                     |
	|         | --alsologtostderr                 |                           |         |         |                     |                     |
	|         | -v=1 --driver=docker              |                           |         |         |                     |                     |
	|         | --container-runtime=crio          |                           |         |         |                     |                     |
	| start   | -p pause-589189                   | pause-589189              | jenkins | v1.32.0 | 03 Jan 24 20:36 UTC | 03 Jan 24 20:37 UTC |
	|         | --alsologtostderr                 |                           |         |         |                     |                     |
	|         | -v=1 --driver=docker              |                           |         |         |                     |                     |
	|         | --container-runtime=crio          |                           |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-753304      | kubernetes-upgrade-753304 | jenkins | v1.32.0 | 03 Jan 24 20:37 UTC | 03 Jan 24 20:37 UTC |
	| start   | -p force-systemd-flag-518436      | force-systemd-flag-518436 | jenkins | v1.32.0 | 03 Jan 24 20:37 UTC |                     |
	|         | --memory=2048 --force-systemd     |                           |         |         |                     |                     |
	|         | --alsologtostderr                 |                           |         |         |                     |                     |
	|         | -v=5 --driver=docker              |                           |         |         |                     |                     |
	|         | --container-runtime=crio          |                           |         |         |                     |                     |
	|---------|-----------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/03 20:37:20
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.21.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0103 20:37:20.002609  548415 out.go:296] Setting OutFile to fd 1 ...
	I0103 20:37:20.002854  548415 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0103 20:37:20.002882  548415 out.go:309] Setting ErrFile to fd 2...
	I0103 20:37:20.002913  548415 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0103 20:37:20.003214  548415 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17885-409390/.minikube/bin
	I0103 20:37:20.003747  548415 out.go:303] Setting JSON to false
	I0103 20:37:20.004913  548415 start.go:128] hostinfo: {"hostname":"ip-172-31-29-130","uptime":8389,"bootTime":1704305851,"procs":206,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0103 20:37:20.005043  548415 start.go:138] virtualization:  
	I0103 20:37:20.009254  548415 out.go:177] * [force-systemd-flag-518436] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0103 20:37:20.014413  548415 notify.go:220] Checking for updates...
	I0103 20:37:20.014370  548415 out.go:177]   - MINIKUBE_LOCATION=17885
	I0103 20:37:20.018065  548415 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0103 20:37:20.020242  548415 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17885-409390/kubeconfig
	I0103 20:37:20.022466  548415 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17885-409390/.minikube
	I0103 20:37:20.024364  548415 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0103 20:37:20.026332  548415 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0103 20:37:20.028865  548415 config.go:182] Loaded profile config "pause-589189": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0103 20:37:20.029066  548415 driver.go:392] Setting default libvirt URI to qemu:///system
	I0103 20:37:20.057661  548415 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I0103 20:37:20.057784  548415 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0103 20:37:20.207998  548415 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:34 OomKillDisable:true NGoroutines:45 SystemTime:2024-01-03 20:37:20.196797244 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215044096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0103 20:37:20.208096  548415 docker.go:295] overlay module found
	I0103 20:37:20.210398  548415 out.go:177] * Using the docker driver based on user configuration
	I0103 20:37:20.212312  548415 start.go:298] selected driver: docker
	I0103 20:37:20.212346  548415 start.go:902] validating driver "docker" against <nil>
	I0103 20:37:20.212360  548415 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0103 20:37:20.213097  548415 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0103 20:37:20.294262  548415 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:34 OomKillDisable:true NGoroutines:45 SystemTime:2024-01-03 20:37:20.284881646 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215044096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0103 20:37:20.294427  548415 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0103 20:37:20.294711  548415 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I0103 20:37:20.296468  548415 out.go:177] * Using Docker driver with root privileges
	I0103 20:37:20.298323  548415 cni.go:84] Creating CNI manager for ""
	I0103 20:37:20.298346  548415 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0103 20:37:20.298357  548415 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I0103 20:37:20.298376  548415 start_flags.go:323] config:
	{Name:force-systemd-flag-518436 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:force-systemd-flag-518436 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0103 20:37:20.300631  548415 out.go:177] * Starting control plane node force-systemd-flag-518436 in cluster force-systemd-flag-518436
	I0103 20:37:20.302590  548415 cache.go:121] Beginning downloading kic base image for docker with crio
	I0103 20:37:20.304549  548415 out.go:177] * Pulling base image v0.0.42-1703498848-17857 ...
	I0103 20:37:20.306729  548415 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0103 20:37:20.306780  548415 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17885-409390/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4
	I0103 20:37:20.306801  548415 cache.go:56] Caching tarball of preloaded images
	I0103 20:37:20.306830  548415 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c in local docker daemon
	I0103 20:37:20.306885  548415 preload.go:174] Found /home/jenkins/minikube-integration/17885-409390/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0103 20:37:20.306895  548415 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0103 20:37:20.307004  548415 profile.go:148] Saving config to /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/force-systemd-flag-518436/config.json ...
	I0103 20:37:20.307025  548415 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/force-systemd-flag-518436/config.json: {Name:mkf962c40932403ce78465e3b38cd3cdb374293b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 20:37:20.325055  548415 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c in local docker daemon, skipping pull
	I0103 20:37:20.325099  548415 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c exists in daemon, skipping load
	I0103 20:37:20.325113  548415 cache.go:194] Successfully downloaded all kic artifacts
	I0103 20:37:20.325153  548415 start.go:365] acquiring machines lock for force-systemd-flag-518436: {Name:mk53306c96806ea93c3c8cab719a89671f7a5b5c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0103 20:37:20.325264  548415 start.go:369] acquired machines lock for "force-systemd-flag-518436" in 90.78µs
	I0103 20:37:20.325295  548415 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-518436 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:force-systemd-flag-518436 Namespace:default AP
IServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnet
ClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0103 20:37:20.325465  548415 start.go:125] createHost starting for "" (driver="docker")
	I0103 20:37:18.925977  545961 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0103 20:37:18.939223  545961 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0103 20:37:18.939243  545961 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0103 20:37:18.986115  545961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0103 20:37:19.934069  545961 system_pods.go:43] waiting for kube-system pods to appear ...
	I0103 20:37:19.944219  545961 system_pods.go:59] 7 kube-system pods found
	I0103 20:37:19.944249  545961 system_pods.go:61] "coredns-5dd5756b68-q766p" [88099227-8f36-44d9-b01c-1d8a5fca054a] Running
	I0103 20:37:19.944266  545961 system_pods.go:61] "etcd-pause-589189" [17cf6adf-0fad-4a34-b8fe-b2560e398e68] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0103 20:37:19.944279  545961 system_pods.go:61] "kindnet-xh476" [16420a9b-d68e-4a16-84d7-e6344f3b9f27] Running
	I0103 20:37:19.944295  545961 system_pods.go:61] "kube-apiserver-pause-589189" [d3e43690-d065-4288-a1f7-795767f523a3] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0103 20:37:19.944304  545961 system_pods.go:61] "kube-controller-manager-pause-589189" [3f7f4884-9a81-4f79-b9e6-9241b8d09840] Running
	I0103 20:37:19.944310  545961 system_pods.go:61] "kube-proxy-qptr2" [a55774e9-f310-4e29-be2d-81f71022a59b] Running
	I0103 20:37:19.944317  545961 system_pods.go:61] "kube-scheduler-pause-589189" [0efb433d-48a7-44d8-a799-e4be9725e28b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0103 20:37:19.944326  545961 system_pods.go:74] duration metric: took 10.238279ms to wait for pod list to return data ...
	I0103 20:37:19.944345  545961 node_conditions.go:102] verifying NodePressure condition ...
	I0103 20:37:19.948540  545961 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0103 20:37:19.949047  545961 node_conditions.go:123] node cpu capacity is 2
	I0103 20:37:19.949062  545961 node_conditions.go:105] duration metric: took 4.707765ms to run NodePressure ...
	I0103 20:37:19.949089  545961 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0103 20:37:20.200831  545961 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0103 20:37:20.208962  545961 kubeadm.go:787] kubelet initialised
	I0103 20:37:20.208989  545961 kubeadm.go:788] duration metric: took 8.134367ms waiting for restarted kubelet to initialise ...
	I0103 20:37:20.208999  545961 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0103 20:37:20.216768  545961 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-q766p" in "kube-system" namespace to be "Ready" ...
	I0103 20:37:20.225478  545961 pod_ready.go:92] pod "coredns-5dd5756b68-q766p" in "kube-system" namespace has status "Ready":"True"
	I0103 20:37:20.225505  545961 pod_ready.go:81] duration metric: took 8.703655ms waiting for pod "coredns-5dd5756b68-q766p" in "kube-system" namespace to be "Ready" ...
	I0103 20:37:20.225521  545961 pod_ready.go:78] waiting up to 4m0s for pod "etcd-pause-589189" in "kube-system" namespace to be "Ready" ...
	I0103 20:37:22.233525  545961 pod_ready.go:102] pod "etcd-pause-589189" in "kube-system" namespace has status "Ready":"False"
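The pod_ready loop above (process 545961, restarting pause-589189) polls each system-critical pod until its Ready condition is True. A minimal standalone sketch of the same check using kubectl, with the label selectors minikube waits on copied from the log (assumes the pause-589189 kubeconfig context exists):

    # Wait up to 4 minutes per component, mirroring minikube's pod_ready loop.
    for sel in k8s-app=kube-dns component=etcd component=kube-apiserver \
               component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler; do
      kubectl --context pause-589189 -n kube-system wait pod \
        --selector "$sel" --for=condition=Ready --timeout=240s
    done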
	I0103 20:37:20.329577  548415 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0103 20:37:20.329836  548415 start.go:159] libmachine.API.Create for "force-systemd-flag-518436" (driver="docker")
	I0103 20:37:20.329886  548415 client.go:168] LocalClient.Create starting
	I0103 20:37:20.329982  548415 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17885-409390/.minikube/certs/ca.pem
	I0103 20:37:20.330040  548415 main.go:141] libmachine: Decoding PEM data...
	I0103 20:37:20.330060  548415 main.go:141] libmachine: Parsing certificate...
	I0103 20:37:20.330115  548415 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17885-409390/.minikube/certs/cert.pem
	I0103 20:37:20.330137  548415 main.go:141] libmachine: Decoding PEM data...
	I0103 20:37:20.330153  548415 main.go:141] libmachine: Parsing certificate...
	I0103 20:37:20.330577  548415 cli_runner.go:164] Run: docker network inspect force-systemd-flag-518436 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0103 20:37:20.348060  548415 cli_runner.go:211] docker network inspect force-systemd-flag-518436 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0103 20:37:20.348146  548415 network_create.go:281] running [docker network inspect force-systemd-flag-518436] to gather additional debugging logs...
	I0103 20:37:20.348167  548415 cli_runner.go:164] Run: docker network inspect force-systemd-flag-518436
	W0103 20:37:20.370568  548415 cli_runner.go:211] docker network inspect force-systemd-flag-518436 returned with exit code 1
	I0103 20:37:20.370609  548415 network_create.go:284] error running [docker network inspect force-systemd-flag-518436]: docker network inspect force-systemd-flag-518436: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-flag-518436 not found
	I0103 20:37:20.370633  548415 network_create.go:286] output of [docker network inspect force-systemd-flag-518436]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-flag-518436 not found
	
	** /stderr **
	I0103 20:37:20.370737  548415 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0103 20:37:20.389286  548415 network.go:214] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-e48a1c7f0405 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:af:08:39:14} reservation:<nil>}
	I0103 20:37:20.389834  548415 network.go:214] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-5ad9a395bb96 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:d1:45:f6:7e} reservation:<nil>}
	I0103 20:37:20.390465  548415 network.go:209] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40025a65d0}
	I0103 20:37:20.390504  548415 network_create.go:124] attempt to create docker network force-systemd-flag-518436 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I0103 20:37:20.390609  548415 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-flag-518436 force-systemd-flag-518436
	I0103 20:37:20.472365  548415 network_create.go:108] docker network force-systemd-flag-518436 192.168.67.0/24 created
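The subnet selection above walks the private 192.168.x.0/24 candidates and skips any already claimed by an existing bridge. A rough bash equivalent of that scan (a hypothetical helper for illustration; minikube does this internally in network.go, not via shell):

    # Find the first 192.168.X.0/24 subnet not used by an existing docker network.
    for third in 49 58 67 76 85; do
      subnet="192.168.${third}.0/24"
      if ! docker network inspect \
             --format '{{range .IPAM.Config}}{{.Subnet}}{{end}}' \
             $(docker network ls -q) 2>/dev/null | grep -qx "$subnet"; then
        echo "free subnet: $subnet"
        break
      fi
    done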
	I0103 20:37:20.472394  548415 kic.go:121] calculated static IP "192.168.67.2" for the "force-systemd-flag-518436" container
	I0103 20:37:20.472474  548415 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0103 20:37:20.492093  548415 cli_runner.go:164] Run: docker volume create force-systemd-flag-518436 --label name.minikube.sigs.k8s.io=force-systemd-flag-518436 --label created_by.minikube.sigs.k8s.io=true
	I0103 20:37:20.512591  548415 oci.go:103] Successfully created a docker volume force-systemd-flag-518436
	I0103 20:37:20.512680  548415 cli_runner.go:164] Run: docker run --rm --name force-systemd-flag-518436-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-518436 --entrypoint /usr/bin/test -v force-systemd-flag-518436:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c -d /var/lib
	I0103 20:37:21.156931  548415 oci.go:107] Successfully prepared a docker volume force-systemd-flag-518436
	I0103 20:37:21.156992  548415 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0103 20:37:21.157013  548415 kic.go:194] Starting extracting preloaded images to volume ...
	I0103 20:37:21.157103  548415 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17885-409390/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-518436:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c -I lz4 -xf /preloaded.tar -C /extractDir
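The preload tarball is extracted directly into the named volume by running a throwaway container with the volume mounted, exactly as the command above does. The same pattern, generalized (VOLUME and TARBALL stand in for the jenkins cache paths in this run):

    VOLUME=force-systemd-flag-518436          # docker volume that will back /var in the node
    TARBALL=preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4
    # Mount the tarball read-only and the volume as the target, then run tar
    # inside the kicbase image so no lz4/tar tooling is needed on the host.
    docker run --rm \
      -v "$PWD/$TARBALL":/preloaded.tar:ro \
      -v "$VOLUME":/extractDir \
      --entrypoint /usr/bin/tar \
      gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857 \
      -I lz4 -xf /preloaded.tar -C /extractDir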
	I0103 20:37:24.732431  545961 pod_ready.go:102] pod "etcd-pause-589189" in "kube-system" namespace has status "Ready":"False"
	I0103 20:37:26.232731  545961 pod_ready.go:92] pod "etcd-pause-589189" in "kube-system" namespace has status "Ready":"True"
	I0103 20:37:26.232751  545961 pod_ready.go:81] duration metric: took 6.007221011s waiting for pod "etcd-pause-589189" in "kube-system" namespace to be "Ready" ...
	I0103 20:37:26.232765  545961 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-pause-589189" in "kube-system" namespace to be "Ready" ...
	I0103 20:37:26.240943  545961 pod_ready.go:92] pod "kube-apiserver-pause-589189" in "kube-system" namespace has status "Ready":"True"
	I0103 20:37:26.240963  545961 pod_ready.go:81] duration metric: took 8.191096ms waiting for pod "kube-apiserver-pause-589189" in "kube-system" namespace to be "Ready" ...
	I0103 20:37:26.240974  545961 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-589189" in "kube-system" namespace to be "Ready" ...
	I0103 20:37:26.254218  545961 pod_ready.go:92] pod "kube-controller-manager-pause-589189" in "kube-system" namespace has status "Ready":"True"
	I0103 20:37:26.254284  545961 pod_ready.go:81] duration metric: took 13.301408ms waiting for pod "kube-controller-manager-pause-589189" in "kube-system" namespace to be "Ready" ...
	I0103 20:37:26.254311  545961 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-qptr2" in "kube-system" namespace to be "Ready" ...
	I0103 20:37:26.260999  545961 pod_ready.go:92] pod "kube-proxy-qptr2" in "kube-system" namespace has status "Ready":"True"
	I0103 20:37:26.261067  545961 pod_ready.go:81] duration metric: took 6.726059ms waiting for pod "kube-proxy-qptr2" in "kube-system" namespace to be "Ready" ...
	I0103 20:37:26.261091  545961 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-589189" in "kube-system" namespace to be "Ready" ...
	I0103 20:37:26.271995  545961 pod_ready.go:92] pod "kube-scheduler-pause-589189" in "kube-system" namespace has status "Ready":"True"
	I0103 20:37:26.272063  545961 pod_ready.go:81] duration metric: took 10.948596ms waiting for pod "kube-scheduler-pause-589189" in "kube-system" namespace to be "Ready" ...
	I0103 20:37:26.272088  545961 pod_ready.go:38] duration metric: took 6.063077977s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0103 20:37:26.272132  545961 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0103 20:37:26.285009  545961 ops.go:34] apiserver oom_adj: -16
	I0103 20:37:26.285080  545961 kubeadm.go:640] restartCluster took 33.225776576s
	I0103 20:37:26.285102  545961 kubeadm.go:406] StartCluster complete in 33.309561555s
	I0103 20:37:26.285130  545961 settings.go:142] acquiring lock: {Name:mk35e0b2d8071191a72193c66ba9549131012420 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 20:37:26.285219  545961 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17885-409390/kubeconfig
	I0103 20:37:26.287070  545961 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17885-409390/kubeconfig: {Name:mkcf9b222e1b36afc1c2e4e412234b0c105c9bc2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 20:37:26.287399  545961 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0103 20:37:26.287749  545961 config.go:182] Loaded profile config "pause-589189": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0103 20:37:26.288040  545961 addons.go:505] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0103 20:37:26.290171  545961 out.go:177] * Enabled addons: 
	I0103 20:37:26.289023  545961 kapi.go:59] client config for pause-589189: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17885-409390/.minikube/profiles/pause-589189/client.crt", KeyFile:"/home/jenkins/minikube-integration/17885-409390/.minikube/profiles/pause-589189/client.key", CAFile:"/home/jenkins/minikube-integration/17885-409390/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]s
tring(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16bfa00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0103 20:37:26.294747  545961 addons.go:508] enable addons completed in 6.723835ms: enabled=[]
	I0103 20:37:26.299312  545961 kapi.go:248] "coredns" deployment in "kube-system" namespace and "pause-589189" context rescaled to 1 replicas
	I0103 20:37:26.299386  545961 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0103 20:37:26.302597  545961 out.go:177] * Verifying Kubernetes components...
	I0103 20:37:26.304313  545961 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0103 20:37:26.490971  545961 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0103 20:37:26.491025  545961 node_ready.go:35] waiting up to 6m0s for node "pause-589189" to be "Ready" ...
	I0103 20:37:26.499892  545961 node_ready.go:49] node "pause-589189" has status "Ready":"True"
	I0103 20:37:26.499920  545961 node_ready.go:38] duration metric: took 8.881269ms waiting for node "pause-589189" to be "Ready" ...
	I0103 20:37:26.499933  545961 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0103 20:37:26.632393  545961 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-q766p" in "kube-system" namespace to be "Ready" ...
	I0103 20:37:27.034670  545961 pod_ready.go:92] pod "coredns-5dd5756b68-q766p" in "kube-system" namespace has status "Ready":"True"
	I0103 20:37:27.034702  545961 pod_ready.go:81] duration metric: took 402.276796ms waiting for pod "coredns-5dd5756b68-q766p" in "kube-system" namespace to be "Ready" ...
	I0103 20:37:27.034714  545961 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-589189" in "kube-system" namespace to be "Ready" ...
	I0103 20:37:27.431905  545961 pod_ready.go:92] pod "etcd-pause-589189" in "kube-system" namespace has status "Ready":"True"
	I0103 20:37:27.431932  545961 pod_ready.go:81] duration metric: took 397.20932ms waiting for pod "etcd-pause-589189" in "kube-system" namespace to be "Ready" ...
	I0103 20:37:27.431950  545961 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-589189" in "kube-system" namespace to be "Ready" ...
	I0103 20:37:27.831111  545961 pod_ready.go:92] pod "kube-apiserver-pause-589189" in "kube-system" namespace has status "Ready":"True"
	I0103 20:37:27.831146  545961 pod_ready.go:81] duration metric: took 399.185973ms waiting for pod "kube-apiserver-pause-589189" in "kube-system" namespace to be "Ready" ...
	I0103 20:37:27.831160  545961 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-589189" in "kube-system" namespace to be "Ready" ...
	I0103 20:37:28.230301  545961 pod_ready.go:92] pod "kube-controller-manager-pause-589189" in "kube-system" namespace has status "Ready":"True"
	I0103 20:37:28.230325  545961 pod_ready.go:81] duration metric: took 399.146904ms waiting for pod "kube-controller-manager-pause-589189" in "kube-system" namespace to be "Ready" ...
	I0103 20:37:28.230343  545961 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-qptr2" in "kube-system" namespace to be "Ready" ...
	I0103 20:37:28.630376  545961 pod_ready.go:92] pod "kube-proxy-qptr2" in "kube-system" namespace has status "Ready":"True"
	I0103 20:37:28.630398  545961 pod_ready.go:81] duration metric: took 400.047144ms waiting for pod "kube-proxy-qptr2" in "kube-system" namespace to be "Ready" ...
	I0103 20:37:28.630410  545961 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-589189" in "kube-system" namespace to be "Ready" ...
	I0103 20:37:29.029577  545961 pod_ready.go:92] pod "kube-scheduler-pause-589189" in "kube-system" namespace has status "Ready":"True"
	I0103 20:37:29.029601  545961 pod_ready.go:81] duration metric: took 399.183648ms waiting for pod "kube-scheduler-pause-589189" in "kube-system" namespace to be "Ready" ...
	I0103 20:37:29.029610  545961 pod_ready.go:38] duration metric: took 2.529663008s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0103 20:37:29.029625  545961 api_server.go:52] waiting for apiserver process to appear ...
	I0103 20:37:29.029688  545961 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 20:37:29.043780  545961 api_server.go:72] duration metric: took 2.744350093s to wait for apiserver process to appear ...
	I0103 20:37:29.043806  545961 api_server.go:88] waiting for apiserver healthz status ...
	I0103 20:37:29.043825  545961 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0103 20:37:29.052906  545961 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0103 20:37:29.054294  545961 api_server.go:141] control plane version: v1.28.4
	I0103 20:37:29.054318  545961 api_server.go:131] duration metric: took 10.505189ms to wait for apiserver health ...
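The healthz probe above is a plain HTTPS GET against the apiserver. To reproduce it by hand, a hedged sketch using the profile certificates whose paths were logged in the kapi client config earlier (this run kept them under the jenkins integration root):

    MK=/home/jenkins/minikube-integration/17885-409390/.minikube   # root from the log above
    curl --cacert "$MK/ca.crt" \
         --cert "$MK/profiles/pause-589189/client.crt" \
         --key  "$MK/profiles/pause-589189/client.key" \
         https://192.168.76.2:8443/healthz
    # expected response body: ok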
	I0103 20:37:29.054328  545961 system_pods.go:43] waiting for kube-system pods to appear ...
	I0103 20:37:29.234659  545961 system_pods.go:59] 7 kube-system pods found
	I0103 20:37:29.234734  545961 system_pods.go:61] "coredns-5dd5756b68-q766p" [88099227-8f36-44d9-b01c-1d8a5fca054a] Running
	I0103 20:37:29.234753  545961 system_pods.go:61] "etcd-pause-589189" [17cf6adf-0fad-4a34-b8fe-b2560e398e68] Running
	I0103 20:37:29.234776  545961 system_pods.go:61] "kindnet-xh476" [16420a9b-d68e-4a16-84d7-e6344f3b9f27] Running
	I0103 20:37:29.234806  545961 system_pods.go:61] "kube-apiserver-pause-589189" [d3e43690-d065-4288-a1f7-795767f523a3] Running
	I0103 20:37:29.234830  545961 system_pods.go:61] "kube-controller-manager-pause-589189" [3f7f4884-9a81-4f79-b9e6-9241b8d09840] Running
	I0103 20:37:29.234849  545961 system_pods.go:61] "kube-proxy-qptr2" [a55774e9-f310-4e29-be2d-81f71022a59b] Running
	I0103 20:37:29.234868  545961 system_pods.go:61] "kube-scheduler-pause-589189" [0efb433d-48a7-44d8-a799-e4be9725e28b] Running
	I0103 20:37:29.234903  545961 system_pods.go:74] duration metric: took 180.56878ms to wait for pod list to return data ...
	I0103 20:37:29.234925  545961 default_sa.go:34] waiting for default service account to be created ...
	I0103 20:37:29.429334  545961 default_sa.go:45] found service account: "default"
	I0103 20:37:29.429428  545961 default_sa.go:55] duration metric: took 194.470442ms for default service account to be created ...
	I0103 20:37:29.429496  545961 system_pods.go:116] waiting for k8s-apps to be running ...
	I0103 20:37:29.644145  545961 system_pods.go:86] 7 kube-system pods found
	I0103 20:37:29.644232  545961 system_pods.go:89] "coredns-5dd5756b68-q766p" [88099227-8f36-44d9-b01c-1d8a5fca054a] Running
	I0103 20:37:29.644255  545961 system_pods.go:89] "etcd-pause-589189" [17cf6adf-0fad-4a34-b8fe-b2560e398e68] Running
	I0103 20:37:29.644281  545961 system_pods.go:89] "kindnet-xh476" [16420a9b-d68e-4a16-84d7-e6344f3b9f27] Running
	I0103 20:37:29.644345  545961 system_pods.go:89] "kube-apiserver-pause-589189" [d3e43690-d065-4288-a1f7-795767f523a3] Running
	I0103 20:37:29.644384  545961 system_pods.go:89] "kube-controller-manager-pause-589189" [3f7f4884-9a81-4f79-b9e6-9241b8d09840] Running
	I0103 20:37:29.644409  545961 system_pods.go:89] "kube-proxy-qptr2" [a55774e9-f310-4e29-be2d-81f71022a59b] Running
	I0103 20:37:29.644431  545961 system_pods.go:89] "kube-scheduler-pause-589189" [0efb433d-48a7-44d8-a799-e4be9725e28b] Running
	I0103 20:37:29.644464  545961 system_pods.go:126] duration metric: took 214.931192ms to wait for k8s-apps to be running ...
	I0103 20:37:29.644489  545961 system_svc.go:44] waiting for kubelet service to be running ....
	I0103 20:37:29.644595  545961 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0103 20:37:29.675819  545961 system_svc.go:56] duration metric: took 31.320051ms WaitForService to wait for kubelet.
	I0103 20:37:29.675843  545961 kubeadm.go:581] duration metric: took 3.376420632s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0103 20:37:29.675862  545961 node_conditions.go:102] verifying NodePressure condition ...
	I0103 20:37:29.831502  545961 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0103 20:37:29.831665  545961 node_conditions.go:123] node cpu capacity is 2
	I0103 20:37:29.831687  545961 node_conditions.go:105] duration metric: took 155.81849ms to run NodePressure ...
	I0103 20:37:29.831707  545961 start.go:228] waiting for startup goroutines ...
	I0103 20:37:29.831714  545961 start.go:233] waiting for cluster config update ...
	I0103 20:37:29.831725  545961 start.go:242] writing updated cluster config ...
	I0103 20:37:29.832091  545961 ssh_runner.go:195] Run: rm -f paused
	I0103 20:37:29.935271  545961 start.go:600] kubectl: 1.29.0, cluster: 1.28.4 (minor skew: 1)
	I0103 20:37:29.937646  545961 out.go:177] * Done! kubectl is now configured to use "pause-589189" cluster and "default" namespace by default
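The closing line flags a client/server minor-version skew of 1 (kubectl 1.29.0 against cluster 1.28.4), which is inside kubectl's supported +/-1 window. A quick way to reproduce the comparison (assumes jq is installed):

    # Print client and server minor versions; kubectl supports a skew of +/-1.
    kubectl version -o json | jq -r '[.clientVersion.minor, .serverVersion.minor] | @tsv'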
	I0103 20:37:25.569955  548415 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17885-409390/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-518436:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c -I lz4 -xf /preloaded.tar -C /extractDir: (4.412814889s)
	I0103 20:37:25.569985  548415 kic.go:203] duration metric: took 4.412969 seconds to extract preloaded images to volume
	W0103 20:37:25.570125  548415 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0103 20:37:25.570252  548415 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0103 20:37:25.655912  548415 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-flag-518436 --name force-systemd-flag-518436 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-518436 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-flag-518436 --network force-systemd-flag-518436 --ip 192.168.67.2 --volume force-systemd-flag-518436:/var --security-opt apparmor=unconfined --memory=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c
	I0103 20:37:26.032261  548415 cli_runner.go:164] Run: docker container inspect force-systemd-flag-518436 --format={{.State.Running}}
	I0103 20:37:26.058968  548415 cli_runner.go:164] Run: docker container inspect force-systemd-flag-518436 --format={{.State.Status}}
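After the docker run, minikube re-inspects the container until it reports a running state, as the two inspect calls above show. The same readiness check as a small poll loop (container name taken from this log):

    # Poll until the kic container reports Running; give up after 60 seconds.
    timeout 60 bash -c \
      'until [ "$(docker container inspect -f "{{.State.Running}}" force-systemd-flag-518436)" = "true" ]; do
         sleep 1
       done'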
	I0103 20:37:26.090918  548415 cli_runner.go:164] Run: docker exec force-systemd-flag-518436 stat /var/lib/dpkg/alternatives/iptables
	I0103 20:37:26.148665  548415 oci.go:144] the created container "force-systemd-flag-518436" has a running status.
	I0103 20:37:26.148701  548415 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17885-409390/.minikube/machines/force-systemd-flag-518436/id_rsa...
	I0103 20:37:26.663711  548415 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-409390/.minikube/machines/force-systemd-flag-518436/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0103 20:37:26.663761  548415 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17885-409390/.minikube/machines/force-systemd-flag-518436/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0103 20:37:26.692559  548415 cli_runner.go:164] Run: docker container inspect force-systemd-flag-518436 --format={{.State.Status}}
	I0103 20:37:26.722268  548415 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0103 20:37:26.722294  548415 kic_runner.go:114] Args: [docker exec --privileged force-systemd-flag-518436 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0103 20:37:26.820656  548415 cli_runner.go:164] Run: docker container inspect force-systemd-flag-518436 --format={{.State.Status}}
	I0103 20:37:26.850737  548415 machine.go:88] provisioning docker machine ...
	I0103 20:37:26.850770  548415 ubuntu.go:169] provisioning hostname "force-systemd-flag-518436"
	I0103 20:37:26.850833  548415 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-518436
	I0103 20:37:26.878654  548415 main.go:141] libmachine: Using SSH client type: native
	I0103 20:37:26.879103  548415 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3beb50] 0x3c12c0 <nil>  [] 0s} 127.0.0.1 33299 <nil> <nil>}
	I0103 20:37:26.879124  548415 main.go:141] libmachine: About to run SSH command:
	sudo hostname force-systemd-flag-518436 && echo "force-systemd-flag-518436" | sudo tee /etc/hostname
	I0103 20:37:27.110950  548415 main.go:141] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-518436
	
	I0103 20:37:27.111046  548415 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-518436
	I0103 20:37:27.140534  548415 main.go:141] libmachine: Using SSH client type: native
	I0103 20:37:27.140980  548415 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3beb50] 0x3c12c0 <nil>  [] 0s} 127.0.0.1 33299 <nil> <nil>}
	I0103 20:37:27.141015  548415 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sforce-systemd-flag-518436' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-flag-518436/g' /etc/hosts;
				else 
					echo '127.0.1.1 force-systemd-flag-518436' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0103 20:37:27.300774  548415 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0103 20:37:27.300844  548415 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17885-409390/.minikube CaCertPath:/home/jenkins/minikube-integration/17885-409390/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17885-409390/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17885-409390/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17885-409390/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17885-409390/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17885-409390/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17885-409390/.minikube}
	I0103 20:37:27.300876  548415 ubuntu.go:177] setting up certificates
	I0103 20:37:27.300897  548415 provision.go:83] configureAuth start
	I0103 20:37:27.300992  548415 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-518436
	I0103 20:37:27.326572  548415 provision.go:138] copyHostCerts
	I0103 20:37:27.326611  548415 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-409390/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17885-409390/.minikube/key.pem
	I0103 20:37:27.326641  548415 exec_runner.go:144] found /home/jenkins/minikube-integration/17885-409390/.minikube/key.pem, removing ...
	I0103 20:37:27.326647  548415 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17885-409390/.minikube/key.pem
	I0103 20:37:27.326721  548415 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17885-409390/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17885-409390/.minikube/key.pem (1679 bytes)
	I0103 20:37:27.326801  548415 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-409390/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17885-409390/.minikube/ca.pem
	I0103 20:37:27.326817  548415 exec_runner.go:144] found /home/jenkins/minikube-integration/17885-409390/.minikube/ca.pem, removing ...
	I0103 20:37:27.326821  548415 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17885-409390/.minikube/ca.pem
	I0103 20:37:27.326856  548415 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17885-409390/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17885-409390/.minikube/ca.pem (1078 bytes)
	I0103 20:37:27.326896  548415 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-409390/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17885-409390/.minikube/cert.pem
	I0103 20:37:27.326912  548415 exec_runner.go:144] found /home/jenkins/minikube-integration/17885-409390/.minikube/cert.pem, removing ...
	I0103 20:37:27.326916  548415 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17885-409390/.minikube/cert.pem
	I0103 20:37:27.326939  548415 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17885-409390/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17885-409390/.minikube/cert.pem (1123 bytes)
	I0103 20:37:27.326980  548415 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17885-409390/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17885-409390/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17885-409390/.minikube/certs/ca-key.pem org=jenkins.force-systemd-flag-518436 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube force-systemd-flag-518436]
	I0103 20:37:27.868454  548415 provision.go:172] copyRemoteCerts
	I0103 20:37:27.868525  548415 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0103 20:37:27.868578  548415 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-518436
	I0103 20:37:27.886912  548415 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33299 SSHKeyPath:/home/jenkins/minikube-integration/17885-409390/.minikube/machines/force-systemd-flag-518436/id_rsa Username:docker}
	I0103 20:37:27.990394  548415 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-409390/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0103 20:37:27.990456  548415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-409390/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0103 20:37:28.024646  548415 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-409390/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0103 20:37:28.024708  548415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-409390/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0103 20:37:28.055214  548415 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-409390/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0103 20:37:28.055280  548415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-409390/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0103 20:37:28.089612  548415 provision.go:86] duration metric: configureAuth took 788.66504ms
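configureAuth signs a per-machine server certificate against the shared minikube CA, with the SAN list logged above (node IP, loopback, hostnames). A hedged openssl equivalent of that signing step, for illustration only (file names are placeholders; minikube performs this in Go, not via openssl):

    # Issue a server cert signed by the minikube CA with the SANs from the log.
    openssl req -new -newkey rsa:2048 -nodes \
      -keyout server-key.pem -out server.csr \
      -subj "/O=jenkins.force-systemd-flag-518436"
    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
      -out server.pem -days 825 \
      -extfile <(printf "subjectAltName=IP:192.168.67.2,IP:127.0.0.1,DNS:localhost,DNS:minikube,DNS:force-systemd-flag-518436")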
	I0103 20:37:28.089641  548415 ubuntu.go:193] setting minikube options for container-runtime
	I0103 20:37:28.089830  548415 config.go:182] Loaded profile config "force-systemd-flag-518436": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0103 20:37:28.089944  548415 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-518436
	I0103 20:37:28.113076  548415 main.go:141] libmachine: Using SSH client type: native
	I0103 20:37:28.114022  548415 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3beb50] 0x3c12c0 <nil>  [] 0s} 127.0.0.1 33299 <nil> <nil>}
	I0103 20:37:28.114059  548415 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0103 20:37:28.373062  548415 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0103 20:37:28.373090  548415 machine.go:91] provisioned docker machine in 1.522329945s
	I0103 20:37:28.373100  548415 client.go:171] LocalClient.Create took 8.043204665s
	I0103 20:37:28.373114  548415 start.go:167] duration metric: libmachine.API.Create for "force-systemd-flag-518436" took 8.043278871s
	I0103 20:37:28.373121  548415 start.go:300] post-start starting for "force-systemd-flag-518436" (driver="docker")
	I0103 20:37:28.373131  548415 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0103 20:37:28.373198  548415 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0103 20:37:28.373255  548415 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-518436
	I0103 20:37:28.392368  548415 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33299 SSHKeyPath:/home/jenkins/minikube-integration/17885-409390/.minikube/machines/force-systemd-flag-518436/id_rsa Username:docker}
	I0103 20:37:28.494119  548415 ssh_runner.go:195] Run: cat /etc/os-release
	I0103 20:37:28.498455  548415 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0103 20:37:28.498491  548415 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0103 20:37:28.498509  548415 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0103 20:37:28.498543  548415 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0103 20:37:28.498555  548415 filesync.go:126] Scanning /home/jenkins/minikube-integration/17885-409390/.minikube/addons for local assets ...
	I0103 20:37:28.498609  548415 filesync.go:126] Scanning /home/jenkins/minikube-integration/17885-409390/.minikube/files for local assets ...
	I0103 20:37:28.498707  548415 filesync.go:149] local asset: /home/jenkins/minikube-integration/17885-409390/.minikube/files/etc/ssl/certs/4147632.pem -> 4147632.pem in /etc/ssl/certs
	I0103 20:37:28.498723  548415 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-409390/.minikube/files/etc/ssl/certs/4147632.pem -> /etc/ssl/certs/4147632.pem
	I0103 20:37:28.498838  548415 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0103 20:37:28.510403  548415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-409390/.minikube/files/etc/ssl/certs/4147632.pem --> /etc/ssl/certs/4147632.pem (1708 bytes)
	I0103 20:37:28.542730  548415 start.go:303] post-start completed in 169.593982ms
	I0103 20:37:28.543114  548415 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-518436
	I0103 20:37:28.560707  548415 profile.go:148] Saving config to /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/force-systemd-flag-518436/config.json ...
	I0103 20:37:28.561000  548415 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0103 20:37:28.561056  548415 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-518436
	I0103 20:37:28.581425  548415 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33299 SSHKeyPath:/home/jenkins/minikube-integration/17885-409390/.minikube/machines/force-systemd-flag-518436/id_rsa Username:docker}
	I0103 20:37:28.676809  548415 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0103 20:37:28.682820  548415 start.go:128] duration metric: createHost completed in 8.357334537s
	I0103 20:37:28.682847  548415 start.go:83] releasing machines lock for "force-systemd-flag-518436", held for 8.3575692s
	I0103 20:37:28.682921  548415 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-518436
	I0103 20:37:28.700645  548415 ssh_runner.go:195] Run: cat /version.json
	I0103 20:37:28.700698  548415 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-518436
	I0103 20:37:28.700733  548415 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0103 20:37:28.700821  548415 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-518436
	I0103 20:37:28.719973  548415 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33299 SSHKeyPath:/home/jenkins/minikube-integration/17885-409390/.minikube/machines/force-systemd-flag-518436/id_rsa Username:docker}
	I0103 20:37:28.720224  548415 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33299 SSHKeyPath:/home/jenkins/minikube-integration/17885-409390/.minikube/machines/force-systemd-flag-518436/id_rsa Username:docker}
	I0103 20:37:28.950630  548415 ssh_runner.go:195] Run: systemctl --version
	I0103 20:37:28.956746  548415 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0103 20:37:29.106946  548415 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0103 20:37:29.113651  548415 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0103 20:37:29.144821  548415 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0103 20:37:29.144909  548415 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0103 20:37:29.197464  548415 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0103 20:37:29.197533  548415 start.go:475] detecting cgroup driver to use...
	I0103 20:37:29.197558  548415 start.go:479] using "systemd" cgroup driver as enforced via flags
	I0103 20:37:29.197658  548415 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0103 20:37:29.220050  548415 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0103 20:37:29.235516  548415 docker.go:203] disabling cri-docker service (if available) ...
	I0103 20:37:29.235588  548415 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0103 20:37:29.252403  548415 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0103 20:37:29.269479  548415 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0103 20:37:29.389727  548415 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0103 20:37:29.547320  548415 docker.go:219] disabling docker service ...
	I0103 20:37:29.547402  548415 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0103 20:37:29.574324  548415 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0103 20:37:29.590224  548415 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0103 20:37:29.722753  548415 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0103 20:37:29.830084  548415 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0103 20:37:29.851009  548415 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0103 20:37:29.875964  548415 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0103 20:37:29.876039  548415 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0103 20:37:29.889720  548415 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I0103 20:37:29.889798  548415 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0103 20:37:29.903266  548415 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0103 20:37:29.917487  548415 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0103 20:37:29.932813  548415 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0103 20:37:29.950124  548415 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0103 20:37:29.974440  548415 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0103 20:37:29.997596  548415 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0103 20:37:30.174440  548415 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0103 20:37:30.360802  548415 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0103 20:37:30.360888  548415 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0103 20:37:30.372146  548415 start.go:543] Will wait 60s for crictl version
	I0103 20:37:30.372228  548415 ssh_runner.go:195] Run: which crictl
	I0103 20:37:30.376905  548415 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0103 20:37:30.438682  548415 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
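After restarting CRI-O, minikube waits up to 60s for the runtime socket and then for a responsive crictl, as the lines above show. The same gate expressed as a shell loop:

    # Block until the CRI-O socket exists, then confirm crictl can reach it.
    timeout 60 bash -c 'until [ -S /var/run/crio/crio.sock ]; do sleep 0.5; done'
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version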
	I0103 20:37:30.438773  548415 ssh_runner.go:195] Run: crio --version
	I0103 20:37:30.497115  548415 ssh_runner.go:195] Run: crio --version
	I0103 20:37:30.562730  548415 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.6 ...
	I0103 20:37:30.564635  548415 cli_runner.go:164] Run: docker network inspect force-systemd-flag-518436 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0103 20:37:30.591265  548415 ssh_runner.go:195] Run: grep 192.168.67.1	host.minikube.internal$ /etc/hosts
	I0103 20:37:30.595987  548415 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.67.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
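The one-liner above updates /etc/hosts idempotently: it strips any stale host.minikube.internal line, appends the fresh mapping, and copies the temp file back under sudo (a plain `>` redirect would fail because the invoking shell, not sudo, opens the target file). The same command, expanded for readability:

    # Idempotently (re)write the host.minikube.internal entry in /etc/hosts.
    ENTRY="192.168.67.1	host.minikube.internal"
    { grep -v $'\thost.minikube.internal$' /etc/hosts; echo "$ENTRY"; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts && rm -f /tmp/h.$$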
	I0103 20:37:30.610935  548415 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0103 20:37:30.611019  548415 ssh_runner.go:195] Run: sudo crictl images --output json
	I0103 20:37:30.723127  548415 crio.go:496] all images are preloaded for cri-o runtime.
	I0103 20:37:30.723154  548415 crio.go:415] Images already preloaded, skipping extraction
	I0103 20:37:30.723208  548415 ssh_runner.go:195] Run: sudo crictl images --output json
	I0103 20:37:30.781980  548415 crio.go:496] all images are preloaded for cri-o runtime.
	I0103 20:37:30.782006  548415 cache_images.go:84] Images are preloaded, skipping loading
	I0103 20:37:30.782079  548415 ssh_runner.go:195] Run: crio config
	I0103 20:37:30.884939  548415 cni.go:84] Creating CNI manager for ""
	I0103 20:37:30.884961  548415 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0103 20:37:30.884993  548415 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0103 20:37:30.885026  548415 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:force-systemd-flag-518436 NodeName:force-systemd-flag-518436 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt Sta
ticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0103 20:37:30.885187  548415 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "force-systemd-flag-518436"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
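The rendered kubeadm config above is written to /var/tmp/minikube/kubeadm.yaml.new (see the scp a few lines below). kubeadm's `config validate` subcommand (available since v1.26, so present in the v1.28.4 binaries this run installs) can sanity-check such a file before it is used; a sketch assuming the path from this log:

    # Validate the generated config against the kubeadm v1beta3 schema.
    sudo /var/lib/minikube/binaries/v1.28.4/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml.new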
	
	I0103 20:37:30.885335  548415 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=force-systemd-flag-518436 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:force-systemd-flag-518436 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0103 20:37:30.885417  548415 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0103 20:37:30.896742  548415 binaries.go:44] Found k8s binaries, skipping transfer
	I0103 20:37:30.896832  548415 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0103 20:37:30.908670  548415 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (435 bytes)
	I0103 20:37:30.931559  548415 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0103 20:37:30.963096  548415 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2105 bytes)
	I0103 20:37:30.993057  548415 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0103 20:37:30.998558  548415 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
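The /etc/hosts rewrite above is an idempotent append: grep -v strips any stale control-plane.minikube.internal line, echo adds the current mapping, and $$ (the shell's PID) keeps the temp file unique before sudo cp moves it into place. The same idiom works for pinning any host entry; a sketch with a hypothetical name and address:

	{ grep -v $'\tregistry.test$' /etc/hosts; echo $'10.0.0.5\tregistry.test'; } > /tmp/h.$$ && sudo cp /tmp/h.$$ /etc/hosts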
	I0103 20:37:31.014954  548415 certs.go:56] Setting up /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/force-systemd-flag-518436 for IP: 192.168.67.2
	I0103 20:37:31.015023  548415 certs.go:190] acquiring lock for shared ca certs: {Name:mk7a87d13d39d2defe5d349d371b78fa1f1e95bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 20:37:31.015159  548415 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17885-409390/.minikube/ca.key
	I0103 20:37:31.015218  548415 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17885-409390/.minikube/proxy-client-ca.key
	I0103 20:37:31.015265  548415 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/force-systemd-flag-518436/client.key
	I0103 20:37:31.015274  548415 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/force-systemd-flag-518436/client.crt with IP's: []
	I0103 20:37:31.208134  548415 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/force-systemd-flag-518436/client.crt ...
	I0103 20:37:31.208204  548415 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/force-systemd-flag-518436/client.crt: {Name:mk91b8f3b9f2c90c004a0889c126c8eb8ede5993 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 20:37:31.208442  548415 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/force-systemd-flag-518436/client.key ...
	I0103 20:37:31.208479  548415 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/force-systemd-flag-518436/client.key: {Name:mk64333c2d80a3ad597de71e3d64a50b00b4e05f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 20:37:31.208643  548415 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/force-systemd-flag-518436/apiserver.key.c7fa3a9e
	I0103 20:37:31.208683  548415 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/force-systemd-flag-518436/apiserver.crt.c7fa3a9e with IP's: [192.168.67.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0103 20:37:31.651385  548415 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/force-systemd-flag-518436/apiserver.crt.c7fa3a9e ...
	I0103 20:37:31.651457  548415 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/force-systemd-flag-518436/apiserver.crt.c7fa3a9e: {Name:mkc91a85a6145d8dce5aa87fc0d952f8a5302842 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 20:37:31.651692  548415 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/force-systemd-flag-518436/apiserver.key.c7fa3a9e ...
	I0103 20:37:31.651731  548415 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/force-systemd-flag-518436/apiserver.key.c7fa3a9e: {Name:mkf0954611963b537ccc5da9a59d22c76d3e5c8e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 20:37:31.651867  548415 certs.go:337] copying /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/force-systemd-flag-518436/apiserver.crt.c7fa3a9e -> /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/force-systemd-flag-518436/apiserver.crt
	I0103 20:37:31.652002  548415 certs.go:341] copying /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/force-systemd-flag-518436/apiserver.key.c7fa3a9e -> /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/force-systemd-flag-518436/apiserver.key
	I0103 20:37:31.652106  548415 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/force-systemd-flag-518436/proxy-client.key
	I0103 20:37:31.652144  548415 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/force-systemd-flag-518436/proxy-client.crt with IP's: []
	I0103 20:37:32.316407  548415 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/force-systemd-flag-518436/proxy-client.crt ...
	I0103 20:37:32.316471  548415 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/force-systemd-flag-518436/proxy-client.crt: {Name:mk1b28bbe43272026af786d8904adc5a7cbf7343 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 20:37:32.316714  548415 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/force-systemd-flag-518436/proxy-client.key ...
	I0103 20:37:32.316758  548415 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/force-systemd-flag-518436/proxy-client.key: {Name:mk29314be11ecb85d3cc6f13a054578b58ec4b06 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0103 20:37:32.316897  548415 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/force-systemd-flag-518436/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0103 20:37:32.316959  548415 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/force-systemd-flag-518436/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0103 20:37:32.316996  548415 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/force-systemd-flag-518436/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0103 20:37:32.317036  548415 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/force-systemd-flag-518436/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0103 20:37:32.317071  548415 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-409390/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0103 20:37:32.317106  548415 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-409390/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0103 20:37:32.317149  548415 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-409390/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0103 20:37:32.317184  548415 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-409390/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0103 20:37:32.317287  548415 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-409390/.minikube/certs/home/jenkins/minikube-integration/17885-409390/.minikube/certs/414763.pem (1338 bytes)
	W0103 20:37:32.317349  548415 certs.go:433] ignoring /home/jenkins/minikube-integration/17885-409390/.minikube/certs/home/jenkins/minikube-integration/17885-409390/.minikube/certs/414763_empty.pem, impossibly tiny 0 bytes
	I0103 20:37:32.317375  548415 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-409390/.minikube/certs/home/jenkins/minikube-integration/17885-409390/.minikube/certs/ca-key.pem (1679 bytes)
	I0103 20:37:32.317422  548415 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-409390/.minikube/certs/home/jenkins/minikube-integration/17885-409390/.minikube/certs/ca.pem (1078 bytes)
	I0103 20:37:32.317478  548415 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-409390/.minikube/certs/home/jenkins/minikube-integration/17885-409390/.minikube/certs/cert.pem (1123 bytes)
	I0103 20:37:32.317521  548415 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-409390/.minikube/certs/home/jenkins/minikube-integration/17885-409390/.minikube/certs/key.pem (1679 bytes)
	I0103 20:37:32.317599  548415 certs.go:437] found cert: /home/jenkins/minikube-integration/17885-409390/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17885-409390/.minikube/files/etc/ssl/certs/4147632.pem (1708 bytes)
	I0103 20:37:32.317650  548415 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-409390/.minikube/certs/414763.pem -> /usr/share/ca-certificates/414763.pem
	I0103 20:37:32.317687  548415 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-409390/.minikube/files/etc/ssl/certs/4147632.pem -> /usr/share/ca-certificates/4147632.pem
	I0103 20:37:32.317725  548415 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17885-409390/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0103 20:37:32.318323  548415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/force-systemd-flag-518436/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0103 20:37:32.350260  548415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/force-systemd-flag-518436/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0103 20:37:32.383458  548415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/force-systemd-flag-518436/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0103 20:37:32.424984  548415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/force-systemd-flag-518436/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0103 20:37:32.459389  548415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-409390/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0103 20:37:32.495795  548415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-409390/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0103 20:37:32.529922  548415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-409390/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0103 20:37:32.561849  548415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-409390/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0103 20:37:32.593527  548415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-409390/.minikube/certs/414763.pem --> /usr/share/ca-certificates/414763.pem (1338 bytes)
	I0103 20:37:32.627552  548415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-409390/.minikube/files/etc/ssl/certs/4147632.pem --> /usr/share/ca-certificates/4147632.pem (1708 bytes)
	I0103 20:37:32.656617  548415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17885-409390/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0103 20:37:32.689431  548415 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0103 20:37:32.717659  548415 ssh_runner.go:195] Run: openssl version
	I0103 20:37:32.729451  548415 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/414763.pem && ln -fs /usr/share/ca-certificates/414763.pem /etc/ssl/certs/414763.pem"
	I0103 20:37:32.745552  548415 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/414763.pem
	I0103 20:37:32.750369  548415 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan  3 20:01 /usr/share/ca-certificates/414763.pem
	I0103 20:37:32.750475  548415 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/414763.pem
	I0103 20:37:32.761465  548415 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/414763.pem /etc/ssl/certs/51391683.0"
	I0103 20:37:32.775852  548415 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4147632.pem && ln -fs /usr/share/ca-certificates/4147632.pem /etc/ssl/certs/4147632.pem"
	I0103 20:37:32.790806  548415 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4147632.pem
	I0103 20:37:32.795812  548415 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan  3 20:01 /usr/share/ca-certificates/4147632.pem
	I0103 20:37:32.795873  548415 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4147632.pem
	I0103 20:37:32.804935  548415 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4147632.pem /etc/ssl/certs/3ec20f2e.0"
	I0103 20:37:32.817956  548415 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0103 20:37:32.832221  548415 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0103 20:37:32.838665  548415 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan  3 19:53 /usr/share/ca-certificates/minikubeCA.pem
	I0103 20:37:32.838774  548415 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0103 20:37:32.848498  548415 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0103 20:37:32.869412  548415 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0103 20:37:32.874827  548415 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
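Exit status 2 from ls is the expected signal here rather than an error: /var/lib/minikube/certs/etcd only exists after kubeadm has generated the etcd serving certificates, so its absence is how minikube concludes this is a first start instead of a restart. The same check can be run by hand against this profile (hypothetical invocation):

	out/minikube-linux-arm64 -p force-systemd-flag-518436 ssh "sudo ls /var/lib/minikube/certs/etcd" || echo "first start"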
	I0103 20:37:32.874929  548415 kubeadm.go:404] StartCluster: {Name:force-systemd-flag-518436 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:force-systemd-flag-518436 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0103 20:37:32.875131  548415 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0103 20:37:32.875240  548415 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0103 20:37:32.933374  548415 cri.go:89] found id: ""
	I0103 20:37:32.933507  548415 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0103 20:37:32.946298  548415 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0103 20:37:32.957879  548415 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0103 20:37:32.957992  548415 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0103 20:37:32.972394  548415 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0103 20:37:32.972450  548415 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0103 20:37:33.041911  548415 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0103 20:37:33.043070  548415 kubeadm.go:322] [preflight] Running pre-flight checks
	I0103 20:37:33.105796  548415 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0103 20:37:33.105873  548415 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1051-aws
	I0103 20:37:33.105926  548415 kubeadm.go:322] OS: Linux
	I0103 20:37:33.105972  548415 kubeadm.go:322] CGROUPS_CPU: enabled
	I0103 20:37:33.106031  548415 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0103 20:37:33.106087  548415 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0103 20:37:33.106143  548415 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0103 20:37:33.106193  548415 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0103 20:37:33.106253  548415 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0103 20:37:33.106307  548415 kubeadm.go:322] CGROUPS_PIDS: enabled
	I0103 20:37:33.106366  548415 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I0103 20:37:33.106421  548415 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I0103 20:37:33.253504  548415 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0103 20:37:33.253614  548415 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0103 20:37:33.253703  548415 kubeadm.go:322] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0103 20:37:33.679579  548415 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
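The kubeadm init transcript for force-systemd-flag-518436 is truncated at this point; everything below is the post-mortem bundle gathered from the pause-589189 node. The "==> section <==" headers that follow are the standard minikube log dump, which (assuming the profile still exists) can be regathered with something like:

	out/minikube-linux-arm64 -p pause-589189 logs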
	
	
	==> CRI-O <==
	Jan 03 20:37:17 pause-589189 crio[2426]: time="2024-01-03 20:37:17.777530582Z" level=info msg="Creating container: kube-system/coredns-5dd5756b68-q766p/coredns" id=54fc4dce-ef9b-48da-a8a3-93ce997d0bd0 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 03 20:37:17 pause-589189 crio[2426]: time="2024-01-03 20:37:17.778056326Z" level=warning msg="Allowed annotations are specified for workload []"
	Jan 03 20:37:17 pause-589189 crio[2426]: time="2024-01-03 20:37:17.850856118Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/985dcb3e125c03e9f511fa98dfd4d8f16b2bc3b8a7017e5919c225fa358ba130/merged/etc/passwd: no such file or directory"
	Jan 03 20:37:17 pause-589189 crio[2426]: time="2024-01-03 20:37:17.850913208Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/985dcb3e125c03e9f511fa98dfd4d8f16b2bc3b8a7017e5919c225fa358ba130/merged/etc/group: no such file or directory"
	Jan 03 20:37:17 pause-589189 crio[2426]: time="2024-01-03 20:37:17.994216456Z" level=info msg="Created container 30eedc6a46c84a94e2c3c67e847250b4c693a24f38b24dc4c8d2ffe89907eb9c: kube-system/kindnet-xh476/kindnet-cni" id=1542492e-ad2e-41bd-b312-3446edb1f31a name=/runtime.v1.RuntimeService/CreateContainer
	Jan 03 20:37:18 pause-589189 crio[2426]: time="2024-01-03 20:37:17.996298248Z" level=info msg="Starting container: 30eedc6a46c84a94e2c3c67e847250b4c693a24f38b24dc4c8d2ffe89907eb9c" id=5fe5b598-d2b8-46b6-aa09-91c57e5f8a8b name=/runtime.v1.RuntimeService/StartContainer
	Jan 03 20:37:18 pause-589189 crio[2426]: time="2024-01-03 20:37:18.017269297Z" level=info msg="Started container" PID=3130 containerID=30eedc6a46c84a94e2c3c67e847250b4c693a24f38b24dc4c8d2ffe89907eb9c description=kube-system/kindnet-xh476/kindnet-cni id=5fe5b598-d2b8-46b6-aa09-91c57e5f8a8b name=/runtime.v1.RuntimeService/StartContainer sandboxID=14ee1f0f3f9368277ea49cbedd43096b3138b778b533d2940e65d4dde11de2f1
	Jan 03 20:37:18 pause-589189 crio[2426]: time="2024-01-03 20:37:18.074100874Z" level=info msg="Created container d96acc06cecf50de1213dd8147d204b73f194bb3e86ac56b91e2b6ac29fe6827: kube-system/coredns-5dd5756b68-q766p/coredns" id=54fc4dce-ef9b-48da-a8a3-93ce997d0bd0 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 03 20:37:18 pause-589189 crio[2426]: time="2024-01-03 20:37:18.077178787Z" level=info msg="Starting container: d96acc06cecf50de1213dd8147d204b73f194bb3e86ac56b91e2b6ac29fe6827" id=15891f20-8630-42db-a50d-fcace8abf2d5 name=/runtime.v1.RuntimeService/StartContainer
	Jan 03 20:37:18 pause-589189 crio[2426]: time="2024-01-03 20:37:18.109227857Z" level=info msg="Started container" PID=3159 containerID=d96acc06cecf50de1213dd8147d204b73f194bb3e86ac56b91e2b6ac29fe6827 description=kube-system/coredns-5dd5756b68-q766p/coredns id=15891f20-8630-42db-a50d-fcace8abf2d5 name=/runtime.v1.RuntimeService/StartContainer sandboxID=959adb8df96711128ac2f95719313a3663564c9dde30a4e9e60a006ae7c618d5
	Jan 03 20:37:18 pause-589189 crio[2426]: time="2024-01-03 20:37:18.128864058Z" level=info msg="Created container ebd980d6d02c4d4ad41835e2d47eb8c29c0cfbfe84c3712dc680224e5bd93e89: kube-system/kube-proxy-qptr2/kube-proxy" id=188eba73-0716-4235-a32f-46b0b051febf name=/runtime.v1.RuntimeService/CreateContainer
	Jan 03 20:37:18 pause-589189 crio[2426]: time="2024-01-03 20:37:18.130809080Z" level=info msg="Starting container: ebd980d6d02c4d4ad41835e2d47eb8c29c0cfbfe84c3712dc680224e5bd93e89" id=a141357d-a1e3-4e15-b296-4a30bd3c1cc0 name=/runtime.v1.RuntimeService/StartContainer
	Jan 03 20:37:18 pause-589189 crio[2426]: time="2024-01-03 20:37:18.228076636Z" level=info msg="Started container" PID=3165 containerID=ebd980d6d02c4d4ad41835e2d47eb8c29c0cfbfe84c3712dc680224e5bd93e89 description=kube-system/kube-proxy-qptr2/kube-proxy id=a141357d-a1e3-4e15-b296-4a30bd3c1cc0 name=/runtime.v1.RuntimeService/StartContainer sandboxID=2502534472b45cfc8467d7909c5fc2b7ea2fda234d7ab31cbad29a69a74d9e1d
	Jan 03 20:37:18 pause-589189 crio[2426]: time="2024-01-03 20:37:18.494639604Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": CREATE"
	Jan 03 20:37:18 pause-589189 crio[2426]: time="2024-01-03 20:37:18.508512637Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Jan 03 20:37:18 pause-589189 crio[2426]: time="2024-01-03 20:37:18.508549675Z" level=info msg="Updated default CNI network name to kindnet"
	Jan 03 20:37:18 pause-589189 crio[2426]: time="2024-01-03 20:37:18.508567176Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": WRITE"
	Jan 03 20:37:18 pause-589189 crio[2426]: time="2024-01-03 20:37:18.527729595Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Jan 03 20:37:18 pause-589189 crio[2426]: time="2024-01-03 20:37:18.527767132Z" level=info msg="Updated default CNI network name to kindnet"
	Jan 03 20:37:18 pause-589189 crio[2426]: time="2024-01-03 20:37:18.527784765Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": RENAME"
	Jan 03 20:37:18 pause-589189 crio[2426]: time="2024-01-03 20:37:18.540670457Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Jan 03 20:37:18 pause-589189 crio[2426]: time="2024-01-03 20:37:18.540715896Z" level=info msg="Updated default CNI network name to kindnet"
	Jan 03 20:37:18 pause-589189 crio[2426]: time="2024-01-03 20:37:18.540749216Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist\": CREATE"
	Jan 03 20:37:18 pause-589189 crio[2426]: time="2024-01-03 20:37:18.548352612Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Jan 03 20:37:18 pause-589189 crio[2426]: time="2024-01-03 20:37:18.548395869Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	ebd980d6d02c4       3ca3ca488cf13fde14cfc4b3ffde0c53a8c161b030f4a444a797fba6aef38c39   17 seconds ago      Running             kube-proxy                2                   2502534472b45       kube-proxy-qptr2
	d96acc06cecf5       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108   17 seconds ago      Running             coredns                   2                   959adb8df9671       coredns-5dd5756b68-q766p
	30eedc6a46c84       04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26   17 seconds ago      Running             kindnet-cni               2                   14ee1f0f3f936       kindnet-xh476
	cbeda173effd8       9961cbceaf234d59b7dcf8a197a024f3e3ce4b7fe2b67c2378efd3d209ca994b   26 seconds ago      Running             kube-controller-manager   2                   70f8e9f178384       kube-controller-manager-pause-589189
	8aa0a9ca18033       05c284c929889d88306fdb3dd14ee2d0132543740f9e247685243214fc3d2c54   27 seconds ago      Running             kube-scheduler            2                   8591909c742a1       kube-scheduler-pause-589189
	35e5b55bdfab7       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace   27 seconds ago      Running             etcd                      2                   f2b5abd3abda2       etcd-pause-589189
	90f6983570911       04b4c447bb9d4840af3bf7e836397379d65df87c86e55dcd27f31a8d11df2419   27 seconds ago      Running             kube-apiserver            2                   b8e42777c245a       kube-apiserver-pause-589189
	0e822a87ca22e       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace   40 seconds ago      Exited              etcd                      1                   f2b5abd3abda2       etcd-pause-589189
	fe332dccccd3c       04b4c447bb9d4840af3bf7e836397379d65df87c86e55dcd27f31a8d11df2419   43 seconds ago      Exited              kube-apiserver            1                   b8e42777c245a       kube-apiserver-pause-589189
	d22ad169c567e       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108   43 seconds ago      Exited              coredns                   1                   959adb8df9671       coredns-5dd5756b68-q766p
	85eb04634a23f       3ca3ca488cf13fde14cfc4b3ffde0c53a8c161b030f4a444a797fba6aef38c39   43 seconds ago      Exited              kube-proxy                1                   2502534472b45       kube-proxy-qptr2
	78f150ce8a96b       04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26   43 seconds ago      Exited              kindnet-cni               1                   14ee1f0f3f936       kindnet-xh476
	48229eac57ee1       9961cbceaf234d59b7dcf8a197a024f3e3ce4b7fe2b67c2378efd3d209ca994b   44 seconds ago      Exited              kube-controller-manager   1                   70f8e9f178384       kube-controller-manager-pause-589189
	0b4de2b22c7b4       05c284c929889d88306fdb3dd14ee2d0132543740f9e247685243214fc3d2c54   44 seconds ago      Exited              kube-scheduler            1                   8591909c742a1       kube-scheduler-pause-589189
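Reading the ATTEMPT column: every Running container is on attempt 2, and its attempt-1 predecessor Exited 40-44 seconds earlier, which points to a coordinated restart of the whole control plane rather than individual crash loops. The POD ID column is identical across attempts, so the sandboxes survived and only the containers were recreated. To inspect one container's history on the node, crictl can filter by name:

	sudo crictl ps -a --name kube-apiserver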
	
	
	==> coredns [d22ad169c567e2011ce0d9196523e4a33db859f3ebadfc4f50c24d4372357ae0] <==
	
	
	==> coredns [d96acc06cecf50de1213dd8147d204b73f194bb3e86ac56b91e2b6ac29fe6827] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:45345 - 9020 "HINFO IN 4866547591508166232.2971136184924569604. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.029445235s
	
	
	==> describe nodes <==
	Name:               pause-589189
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-589189
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1b6a81cbc05f28310ff11df4170e79e2b8bf477a
	                    minikube.k8s.io/name=pause-589189
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_03T20_35_56_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 03 Jan 2024 20:35:52 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-589189
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 03 Jan 2024 20:37:27 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 03 Jan 2024 20:37:17 +0000   Wed, 03 Jan 2024 20:35:47 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 03 Jan 2024 20:37:17 +0000   Wed, 03 Jan 2024 20:35:47 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 03 Jan 2024 20:37:17 +0000   Wed, 03 Jan 2024 20:35:47 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 03 Jan 2024 20:37:17 +0000   Wed, 03 Jan 2024 20:36:40 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    pause-589189
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022504Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022504Ki
	  pods:               110
	System Info:
	  Machine ID:                 209f005ad7c74580832a102266efa806
	  System UUID:                77b79695-f93f-4462-92b1-48d0a71f7d17
	  Boot ID:                    75f8dc93-969c-4083-a399-3fa01ac68612
	  Kernel Version:             5.15.0-1051-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-q766p                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     87s
	  kube-system                 etcd-pause-589189                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         101s
	  kube-system                 kindnet-xh476                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      87s
	  kube-system                 kube-apiserver-pause-589189             250m (12%)    0 (0%)      0 (0%)           0 (0%)         99s
	  kube-system                 kube-controller-manager-pause-589189    200m (10%)    0 (0%)      0 (0%)           0 (0%)         99s
	  kube-system                 kube-proxy-qptr2                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         87s
	  kube-system                 kube-scheduler-pause-589189             100m (5%)     0 (0%)      0 (0%)           0 (0%)         99s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 85s                  kube-proxy       
	  Normal  Starting                 17s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  109s (x8 over 109s)  kubelet          Node pause-589189 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    109s (x8 over 109s)  kubelet          Node pause-589189 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     109s (x8 over 109s)  kubelet          Node pause-589189 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientPID     100s                 kubelet          Node pause-589189 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  100s                 kubelet          Node pause-589189 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    100s                 kubelet          Node pause-589189 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 100s                 kubelet          Starting kubelet.
	  Normal  RegisteredNode           88s                  node-controller  Node pause-589189 event: Registered Node pause-589189 in Controller
	  Normal  NodeReady                55s                  kubelet          Node pause-589189 status is now: NodeReady
	  Normal  Starting                 28s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  28s (x8 over 28s)    kubelet          Node pause-589189 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    28s (x8 over 28s)    kubelet          Node pause-589189 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     28s (x8 over 28s)    kubelet          Node pause-589189 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6s                   node-controller  Node pause-589189 event: Registered Node pause-589189 in Controller
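The events table records the same restart twice over: "Starting kubelet." at 100s and again at 28s, each followed by its own RegisteredNode event. This is ordinary kubectl output and, assuming the kubeconfig context carries the profile name, can be regenerated with:

	kubectl --context pause-589189 describe node pause-589189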
	
	
	==> dmesg <==
	[  +0.001189] FS-Cache: O-key=[8] 'ccd1c90000000000'
	[  +0.000818] FS-Cache: N-cookie c=0000001e [p=00000015 fl=2 nc=0 na=1]
	[  +0.001059] FS-Cache: N-cookie d=000000008497ac2d{9p.inode} n=00000000a750ea4f
	[  +0.001301] FS-Cache: N-key=[8] 'ccd1c90000000000'
	[  +0.014646] FS-Cache: Duplicate cookie detected
	[  +0.000925] FS-Cache: O-cookie c=00000018 [p=00000015 fl=226 nc=0 na=1]
	[  +0.001115] FS-Cache: O-cookie d=000000008497ac2d{9p.inode} n=00000000f7d3da5e
	[  +0.001218] FS-Cache: O-key=[8] 'ccd1c90000000000'
	[  +0.000824] FS-Cache: N-cookie c=0000001f [p=00000015 fl=2 nc=0 na=1]
	[  +0.001156] FS-Cache: N-cookie d=000000008497ac2d{9p.inode} n=00000000bc524ce4
	[  +0.001241] FS-Cache: N-key=[8] 'ccd1c90000000000'
	[  +2.760106] FS-Cache: Duplicate cookie detected
	[  +0.000784] FS-Cache: O-cookie c=00000016 [p=00000015 fl=226 nc=0 na=1]
	[  +0.001116] FS-Cache: O-cookie d=000000008497ac2d{9p.inode} n=00000000ca9fc0f7
	[  +0.001225] FS-Cache: O-key=[8] 'cbd1c90000000000'
	[  +0.000783] FS-Cache: N-cookie c=00000021 [p=00000015 fl=2 nc=0 na=1]
	[  +0.001000] FS-Cache: N-cookie d=000000008497ac2d{9p.inode} n=000000003725d1cd
	[  +0.001192] FS-Cache: N-key=[8] 'cbd1c90000000000'
	[  +0.402621] FS-Cache: Duplicate cookie detected
	[  +0.000828] FS-Cache: O-cookie c=0000001b [p=00000015 fl=226 nc=0 na=1]
	[  +0.001155] FS-Cache: O-cookie d=000000008497ac2d{9p.inode} n=00000000458cff56
	[  +0.001202] FS-Cache: O-key=[8] 'd1d1c90000000000'
	[  +0.000836] FS-Cache: N-cookie c=00000022 [p=00000015 fl=2 nc=0 na=1]
	[  +0.001046] FS-Cache: N-cookie d=000000008497ac2d{9p.inode} n=00000000263e5b2a
	[  +0.001184] FS-Cache: N-key=[8] 'd1d1c90000000000'
	
	
	==> etcd [0e822a87ca22ecbd73e32a4bb31f706833c88c097de26c88b332a4865097c93f] <==
	{"level":"info","ts":"2024-01-03T20:36:54.861716Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2024-01-03T20:36:56.648071Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2024-01-03T20:36:56.648119Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-01-03T20:36:56.648154Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2024-01-03T20:36:56.648168Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2024-01-03T20:36:56.648175Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2024-01-03T20:36:56.648185Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2024-01-03T20:36:56.648192Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2024-01-03T20:36:56.653071Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-03T20:36:56.653232Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-03T20:36:56.654217Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2024-01-03T20:36:56.654235Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-01-03T20:36:56.653075Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:pause-589189 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2024-01-03T20:36:56.654595Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-01-03T20:36:56.654613Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-01-03T20:37:03.264134Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-01-03T20:37:03.26421Z","caller":"embed/etcd.go:376","msg":"closing etcd server","name":"pause-589189","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	{"level":"warn","ts":"2024-01-03T20:37:03.264299Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-01-03T20:37:03.264324Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-01-03T20:37:03.265023Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.76.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-01-03T20:37:03.265052Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.76.2:2379: use of closed network connection"}
	{"level":"info","ts":"2024-01-03T20:37:03.265175Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"ea7e25599daad906","current-leader-member-id":"ea7e25599daad906"}
	{"level":"info","ts":"2024-01-03T20:37:03.268595Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2024-01-03T20:37:03.26873Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2024-01-03T20:37:03.268741Z","caller":"embed/etcd.go:378","msg":"closed etcd server","name":"pause-589189","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	
	
	==> etcd [35e5b55bdfab72b4d4590efb9cc4298d7ec87ffe781d82e27efac9ad6779c9c9] <==
	{"level":"info","ts":"2024-01-03T20:37:08.919872Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2024-01-03T20:37:08.919959Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-03T20:37:08.919985Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-03T20:37:08.921833Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-01-03T20:37:08.921883Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-01-03T20:37:08.921892Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-01-03T20:37:08.984293Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-01-03T20:37:08.984533Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-01-03T20:37:08.984571Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-01-03T20:37:08.984671Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2024-01-03T20:37:08.98468Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2024-01-03T20:37:10.562891Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 3"}
	{"level":"info","ts":"2024-01-03T20:37:10.563024Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-01-03T20:37:10.563076Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2024-01-03T20:37:10.563131Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 4"}
	{"level":"info","ts":"2024-01-03T20:37:10.563172Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 4"}
	{"level":"info","ts":"2024-01-03T20:37:10.563213Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 4"}
	{"level":"info","ts":"2024-01-03T20:37:10.563248Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 4"}
	{"level":"info","ts":"2024-01-03T20:37:10.587519Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:pause-589189 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2024-01-03T20:37:10.587756Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-03T20:37:10.588793Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2024-01-03T20:37:10.588907Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-03T20:37:10.589753Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-01-03T20:37:10.589863Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-01-03T20:37:10.589917Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 20:37:35 up  2:20,  0 users,  load average: 4.72, 3.03, 2.32
	Linux pause-589189 5.15.0-1051-aws #56~20.04.1-Ubuntu SMP Tue Nov 28 15:43:06 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	
	==> kindnet [30eedc6a46c84a94e2c3c67e847250b4c693a24f38b24dc4c8d2ffe89907eb9c] <==
	I0103 20:37:18.160597       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0103 20:37:18.160918       1 main.go:107] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I0103 20:37:18.161236       1 main.go:116] setting mtu 1500 for CNI 
	I0103 20:37:18.161303       1 main.go:146] kindnetd IP family: "ipv4"
	I0103 20:37:18.161361       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0103 20:37:18.494391       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0103 20:37:18.494432       1 main.go:227] handling current node
	I0103 20:37:28.512853       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0103 20:37:28.512975       1 main.go:227] handling current node
	
	
	==> kindnet [78f150ce8a96bcfa7c3d36b4aab3838e2634c219621756b6d4ce3911e540b5a8] <==
	
	
	==> kube-apiserver [90f6983570911e6942b79abe8d42b3e6b272a1e14ccc3350a0f57594e7112913] <==
	I0103 20:37:16.605057       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0103 20:37:16.605065       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0103 20:37:16.605072       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0103 20:37:16.605080       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0103 20:37:16.634454       1 shared_informer.go:311] Waiting for caches to sync for crd-autoregister
	I0103 20:37:16.927717       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0103 20:37:16.931522       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0103 20:37:16.932261       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0103 20:37:16.932333       1 shared_informer.go:318] Caches are synced for configmaps
	I0103 20:37:16.934622       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0103 20:37:16.935497       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0103 20:37:16.935601       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0103 20:37:16.952485       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0103 20:37:16.961776       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0103 20:37:16.963892       1 aggregator.go:166] initial CRD sync complete...
	I0103 20:37:16.964008       1 autoregister_controller.go:141] Starting autoregister controller
	I0103 20:37:16.964045       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0103 20:37:16.964117       1 cache.go:39] Caches are synced for autoregister controller
	E0103 20:37:17.021983       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0103 20:37:17.408837       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0103 20:37:19.924072       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0103 20:37:20.076961       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0103 20:37:20.097865       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0103 20:37:20.179540       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0103 20:37:20.188593       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	
	==> kube-apiserver [fe332dccccd3cfdf9eb22fda618573d4ce50f4b9a7d78741ca394f0248e4a1bb] <==
	
	
	==> kube-controller-manager [48229eac57ee1c0f3d9f6bc28113a1e66b5ba7e3490f1f786830c924316737c5] <==
	
	
	==> kube-controller-manager [cbeda173effd84e4da29608075847c35a44e01ec35994c85262d3c83288eb00b] <==
	I0103 20:37:29.583526       1 shared_informer.go:318] Caches are synced for service account
	I0103 20:37:29.583609       1 taint_manager.go:205] "Starting NoExecuteTaintManager"
	I0103 20:37:29.583665       1 taint_manager.go:210] "Sending events to api server"
	I0103 20:37:29.584380       1 event.go:307] "Event occurred" object="pause-589189" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node pause-589189 event: Registered Node pause-589189 in Controller"
	I0103 20:37:29.588547       1 shared_informer.go:318] Caches are synced for persistent volume
	I0103 20:37:29.592150       1 shared_informer.go:318] Caches are synced for PV protection
	I0103 20:37:29.593351       1 shared_informer.go:318] Caches are synced for daemon sets
	I0103 20:37:29.594649       1 shared_informer.go:318] Caches are synced for endpoint
	I0103 20:37:29.596280       1 shared_informer.go:318] Caches are synced for GC
	I0103 20:37:29.598950       1 shared_informer.go:318] Caches are synced for deployment
	I0103 20:37:29.602274       1 shared_informer.go:318] Caches are synced for ReplicaSet
	I0103 20:37:29.602490       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="77.66µs"
	I0103 20:37:29.608352       1 shared_informer.go:318] Caches are synced for expand
	I0103 20:37:29.608423       1 shared_informer.go:318] Caches are synced for ReplicationController
	I0103 20:37:29.618107       1 shared_informer.go:318] Caches are synced for PVC protection
	I0103 20:37:29.629739       1 shared_informer.go:318] Caches are synced for HPA
	I0103 20:37:29.632699       1 shared_informer.go:318] Caches are synced for stateful set
	I0103 20:37:29.673230       1 shared_informer.go:318] Caches are synced for TTL after finished
	I0103 20:37:29.676410       1 shared_informer.go:318] Caches are synced for resource quota
	I0103 20:37:29.676528       1 shared_informer.go:318] Caches are synced for job
	I0103 20:37:29.743878       1 shared_informer.go:318] Caches are synced for resource quota
	I0103 20:37:29.763945       1 shared_informer.go:318] Caches are synced for cronjob
	I0103 20:37:30.090558       1 shared_informer.go:318] Caches are synced for garbage collector
	I0103 20:37:30.090682       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0103 20:37:30.154392       1 shared_informer.go:318] Caches are synced for garbage collector
	
	
	==> kube-proxy [85eb04634a23fe2f0cabe61dd973cbc0b3efaf531040055e52ee92f023d47bfe] <==
	
	
	==> kube-proxy [ebd980d6d02c4d4ad41835e2d47eb8c29c0cfbfe84c3712dc680224e5bd93e89] <==
	I0103 20:37:18.400422       1 server_others.go:69] "Using iptables proxy"
	I0103 20:37:18.424742       1 node.go:141] Successfully retrieved node IP: 192.168.76.2
	I0103 20:37:18.548458       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0103 20:37:18.551484       1 server_others.go:152] "Using iptables Proxier"
	I0103 20:37:18.551625       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0103 20:37:18.551663       1 server_others.go:438] "Defaulting to no-op detect-local"
	I0103 20:37:18.551770       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0103 20:37:18.552056       1 server.go:846] "Version info" version="v1.28.4"
	I0103 20:37:18.552325       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0103 20:37:18.553266       1 config.go:188] "Starting service config controller"
	I0103 20:37:18.553556       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0103 20:37:18.553629       1 config.go:97] "Starting endpoint slice config controller"
	I0103 20:37:18.553669       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0103 20:37:18.554403       1 config.go:315] "Starting node config controller"
	I0103 20:37:18.554474       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0103 20:37:18.655671       1 shared_informer.go:318] Caches are synced for node config
	I0103 20:37:18.655768       1 shared_informer.go:318] Caches are synced for service config
	I0103 20:37:18.655795       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [0b4de2b22c7b4724729f6e7fb8c0e390ff80431329c90ae0cc8d0a069bcdfb8d] <==
	
	
	==> kube-scheduler [8aa0a9ca18033bc8d38f5704405cf21ef9273e00c0d4de80c1e533396758ab3f] <==
	I0103 20:37:13.918951       1 serving.go:348] Generated self-signed cert in-memory
	W0103 20:37:16.780405       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0103 20:37:16.780522       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0103 20:37:16.780558       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0103 20:37:16.780616       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0103 20:37:16.901396       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I0103 20:37:16.901515       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0103 20:37:16.908686       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0103 20:37:16.908813       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0103 20:37:16.911538       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0103 20:37:16.911674       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0103 20:37:17.012955       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jan 03 20:37:08 pause-589189 kubelet[2903]: E0103 20:37:08.599027    2903 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	Jan 03 20:37:08 pause-589189 kubelet[2903]: W0103 20:37:08.690506    2903 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	Jan 03 20:37:08 pause-589189 kubelet[2903]: E0103 20:37:08.690833    2903 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	Jan 03 20:37:08 pause-589189 kubelet[2903]: W0103 20:37:08.823229    2903 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	Jan 03 20:37:08 pause-589189 kubelet[2903]: E0103 20:37:08.823290    2903 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	Jan 03 20:37:08 pause-589189 kubelet[2903]: W0103 20:37:08.831957    2903 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)pause-589189&limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	Jan 03 20:37:08 pause-589189 kubelet[2903]: E0103 20:37:08.832022    2903 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)pause-589189&limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	Jan 03 20:37:09 pause-589189 kubelet[2903]: I0103 20:37:09.041337    2903 kubelet_node_status.go:70] "Attempting to register node" node="pause-589189"
	Jan 03 20:37:17 pause-589189 kubelet[2903]: I0103 20:37:17.000178    2903 kubelet_node_status.go:108] "Node was previously registered" node="pause-589189"
	Jan 03 20:37:17 pause-589189 kubelet[2903]: I0103 20:37:17.000282    2903 kubelet_node_status.go:73] "Successfully registered node" node="pause-589189"
	Jan 03 20:37:17 pause-589189 kubelet[2903]: I0103 20:37:17.002707    2903 kuberuntime_manager.go:1528] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Jan 03 20:37:17 pause-589189 kubelet[2903]: I0103 20:37:17.003590    2903 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Jan 03 20:37:17 pause-589189 kubelet[2903]: I0103 20:37:17.460753    2903 apiserver.go:52] "Watching apiserver"
	Jan 03 20:37:17 pause-589189 kubelet[2903]: I0103 20:37:17.469084    2903 topology_manager.go:215] "Topology Admit Handler" podUID="16420a9b-d68e-4a16-84d7-e6344f3b9f27" podNamespace="kube-system" podName="kindnet-xh476"
	Jan 03 20:37:17 pause-589189 kubelet[2903]: I0103 20:37:17.469223    2903 topology_manager.go:215] "Topology Admit Handler" podUID="a55774e9-f310-4e29-be2d-81f71022a59b" podNamespace="kube-system" podName="kube-proxy-qptr2"
	Jan 03 20:37:17 pause-589189 kubelet[2903]: I0103 20:37:17.469274    2903 topology_manager.go:215] "Topology Admit Handler" podUID="88099227-8f36-44d9-b01c-1d8a5fca054a" podNamespace="kube-system" podName="coredns-5dd5756b68-q766p"
	Jan 03 20:37:17 pause-589189 kubelet[2903]: I0103 20:37:17.513351    2903 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
	Jan 03 20:37:17 pause-589189 kubelet[2903]: I0103 20:37:17.610004    2903 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/16420a9b-d68e-4a16-84d7-e6344f3b9f27-lib-modules\") pod \"kindnet-xh476\" (UID: \"16420a9b-d68e-4a16-84d7-e6344f3b9f27\") " pod="kube-system/kindnet-xh476"
	Jan 03 20:37:17 pause-589189 kubelet[2903]: I0103 20:37:17.610078    2903 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a55774e9-f310-4e29-be2d-81f71022a59b-xtables-lock\") pod \"kube-proxy-qptr2\" (UID: \"a55774e9-f310-4e29-be2d-81f71022a59b\") " pod="kube-system/kube-proxy-qptr2"
	Jan 03 20:37:17 pause-589189 kubelet[2903]: I0103 20:37:17.610115    2903 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/16420a9b-d68e-4a16-84d7-e6344f3b9f27-cni-cfg\") pod \"kindnet-xh476\" (UID: \"16420a9b-d68e-4a16-84d7-e6344f3b9f27\") " pod="kube-system/kindnet-xh476"
	Jan 03 20:37:17 pause-589189 kubelet[2903]: I0103 20:37:17.610150    2903 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/16420a9b-d68e-4a16-84d7-e6344f3b9f27-xtables-lock\") pod \"kindnet-xh476\" (UID: \"16420a9b-d68e-4a16-84d7-e6344f3b9f27\") " pod="kube-system/kindnet-xh476"
	Jan 03 20:37:17 pause-589189 kubelet[2903]: I0103 20:37:17.610178    2903 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a55774e9-f310-4e29-be2d-81f71022a59b-lib-modules\") pod \"kube-proxy-qptr2\" (UID: \"a55774e9-f310-4e29-be2d-81f71022a59b\") " pod="kube-system/kube-proxy-qptr2"
	Jan 03 20:37:17 pause-589189 kubelet[2903]: I0103 20:37:17.770653    2903 scope.go:117] "RemoveContainer" containerID="d22ad169c567e2011ce0d9196523e4a33db859f3ebadfc4f50c24d4372357ae0"
	Jan 03 20:37:17 pause-589189 kubelet[2903]: I0103 20:37:17.771776    2903 scope.go:117] "RemoveContainer" containerID="78f150ce8a96bcfa7c3d36b4aab3838e2634c219621756b6d4ce3911e540b5a8"
	Jan 03 20:37:17 pause-589189 kubelet[2903]: I0103 20:37:17.772173    2903 scope.go:117] "RemoveContainer" containerID="85eb04634a23fe2f0cabe61dd973cbc0b3efaf531040055e52ee92f023d47bfe"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-589189 -n pause-589189
helpers_test.go:261: (dbg) Run:  kubectl --context pause-589189 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (54.80s)

Test pass (271/310)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 13.51
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.1
10 TestDownloadOnly/v1.28.4/json-events 16.83
11 TestDownloadOnly/v1.28.4/preload-exists 0
15 TestDownloadOnly/v1.28.4/LogsDuration 0.09
17 TestDownloadOnly/v1.29.0-rc.2/json-events 22.45
18 TestDownloadOnly/v1.29.0-rc.2/preload-exists 0
22 TestDownloadOnly/v1.29.0-rc.2/LogsDuration 0.09
23 TestDownloadOnly/DeleteAll 0.24
24 TestDownloadOnly/DeleteAlwaysSucceeds 0.16
26 TestBinaryMirror 0.63
30 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.09
31 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.08
32 TestAddons/Setup 173.81
34 TestAddons/parallel/Registry 16.94
36 TestAddons/parallel/InspektorGadget 12.21
37 TestAddons/parallel/MetricsServer 6.83
40 TestAddons/parallel/CSI 45.19
41 TestAddons/parallel/Headlamp 12.7
42 TestAddons/parallel/CloudSpanner 5.71
43 TestAddons/parallel/LocalPath 52.75
44 TestAddons/parallel/NvidiaDevicePlugin 6.65
45 TestAddons/parallel/Yakd 6
48 TestAddons/serial/GCPAuth/Namespaces 0.2
49 TestAddons/StoppedEnableDisable 12.31
50 TestCertOptions 34.57
51 TestCertExpiration 246.17
53 TestForceSystemdFlag 41.64
54 TestForceSystemdEnv 42.9
60 TestErrorSpam/setup 29.16
61 TestErrorSpam/start 0.88
62 TestErrorSpam/status 1.18
63 TestErrorSpam/pause 1.96
64 TestErrorSpam/unpause 2
65 TestErrorSpam/stop 1.48
68 TestFunctional/serial/CopySyncFile 0
69 TestFunctional/serial/StartWithProxy 72.8
70 TestFunctional/serial/AuditLog 0
71 TestFunctional/serial/SoftStart 34.9
72 TestFunctional/serial/KubeContext 0.07
73 TestFunctional/serial/KubectlGetPods 0.11
76 TestFunctional/serial/CacheCmd/cache/add_remote 4.27
77 TestFunctional/serial/CacheCmd/cache/add_local 1.12
78 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.09
79 TestFunctional/serial/CacheCmd/cache/list 0.07
80 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.34
81 TestFunctional/serial/CacheCmd/cache/cache_reload 2.21
82 TestFunctional/serial/CacheCmd/cache/delete 0.15
83 TestFunctional/serial/MinikubeKubectlCmd 0.16
84 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.18
85 TestFunctional/serial/ExtraConfig 35.27
86 TestFunctional/serial/ComponentHealth 0.13
87 TestFunctional/serial/LogsCmd 1.89
88 TestFunctional/serial/LogsFileCmd 1.93
89 TestFunctional/serial/InvalidService 4.5
91 TestFunctional/parallel/ConfigCmd 0.61
92 TestFunctional/parallel/DashboardCmd 10.58
93 TestFunctional/parallel/DryRun 0.66
94 TestFunctional/parallel/InternationalLanguage 0.27
95 TestFunctional/parallel/StatusCmd 1.34
99 TestFunctional/parallel/ServiceCmdConnect 10.78
100 TestFunctional/parallel/AddonsCmd 0.28
101 TestFunctional/parallel/PersistentVolumeClaim 25.95
103 TestFunctional/parallel/SSHCmd 0.83
104 TestFunctional/parallel/CpCmd 2.67
106 TestFunctional/parallel/FileSync 0.48
107 TestFunctional/parallel/CertSync 2.46
111 TestFunctional/parallel/NodeLabels 0.17
113 TestFunctional/parallel/NonActiveRuntimeDisabled 0.88
115 TestFunctional/parallel/License 0.34
117 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.78
118 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
120 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 10.56
121 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.14
122 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
126 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.21
127 TestFunctional/parallel/ServiceCmd/DeployApp 7.23
128 TestFunctional/parallel/ProfileCmd/profile_not_create 0.52
129 TestFunctional/parallel/ProfileCmd/profile_list 0.44
130 TestFunctional/parallel/ProfileCmd/profile_json_output 0.44
131 TestFunctional/parallel/MountCmd/any-port 8.59
132 TestFunctional/parallel/ServiceCmd/List 0.68
133 TestFunctional/parallel/ServiceCmd/JSONOutput 0.59
134 TestFunctional/parallel/ServiceCmd/HTTPS 0.55
135 TestFunctional/parallel/ServiceCmd/Format 0.44
136 TestFunctional/parallel/ServiceCmd/URL 0.47
137 TestFunctional/parallel/MountCmd/specific-port 1.54
138 TestFunctional/parallel/MountCmd/VerifyCleanup 3.45
139 TestFunctional/parallel/Version/short 0.11
140 TestFunctional/parallel/Version/components 1.56
141 TestFunctional/parallel/ImageCommands/ImageListShort 0.36
142 TestFunctional/parallel/ImageCommands/ImageListTable 0.31
143 TestFunctional/parallel/ImageCommands/ImageListJson 0.33
144 TestFunctional/parallel/ImageCommands/ImageListYaml 0.29
145 TestFunctional/parallel/ImageCommands/ImageBuild 3.24
146 TestFunctional/parallel/ImageCommands/Setup 2.75
147 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 5.47
148 TestFunctional/parallel/UpdateContextCmd/no_changes 0.28
149 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.27
150 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.27
151 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.93
152 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 6.49
153 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.93
154 TestFunctional/parallel/ImageCommands/ImageRemove 0.57
155 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.31
156 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.98
157 TestFunctional/delete_addon-resizer_images 0.09
158 TestFunctional/delete_my-image_image 0.02
159 TestFunctional/delete_minikube_cached_images 0.02
163 TestIngressAddonLegacy/StartLegacyK8sCluster 89.29
165 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 12.48
166 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.69
170 TestJSONOutput/start/Command 52.31
171 TestJSONOutput/start/Audit 0
173 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
174 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
176 TestJSONOutput/pause/Command 0.85
177 TestJSONOutput/pause/Audit 0
179 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
180 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
182 TestJSONOutput/unpause/Command 1.02
183 TestJSONOutput/unpause/Audit 0
185 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
186 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
188 TestJSONOutput/stop/Command 6
189 TestJSONOutput/stop/Audit 0
191 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
192 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
193 TestErrorJSONOutput 0.27
195 TestKicCustomNetwork/create_custom_network 44.01
196 TestKicCustomNetwork/use_default_bridge_network 35.66
197 TestKicExistingNetwork 35.15
198 TestKicCustomSubnet 36.3
199 TestKicStaticIP 35.12
200 TestMainNoArgs 0.08
201 TestMinikubeProfile 70.03
204 TestMountStart/serial/StartWithMountFirst 7.9
205 TestMountStart/serial/VerifyMountFirst 0.3
206 TestMountStart/serial/StartWithMountSecond 8.12
207 TestMountStart/serial/VerifyMountSecond 0.31
208 TestMountStart/serial/DeleteFirst 1.78
209 TestMountStart/serial/VerifyMountPostDelete 0.31
210 TestMountStart/serial/Stop 1.23
211 TestMountStart/serial/RestartStopped 7.99
212 TestMountStart/serial/VerifyMountPostStop 0.31
215 TestMultiNode/serial/FreshStart2Nodes 124.89
216 TestMultiNode/serial/DeployApp2Nodes 5.41
218 TestMultiNode/serial/AddNode 50.58
219 TestMultiNode/serial/MultiNodeLabels 0.12
220 TestMultiNode/serial/ProfileList 0.37
221 TestMultiNode/serial/CopyFile 11.52
222 TestMultiNode/serial/StopNode 2.39
223 TestMultiNode/serial/StartAfterStop 13.2
224 TestMultiNode/serial/RestartKeepsNodes 121.77
225 TestMultiNode/serial/DeleteNode 5.25
226 TestMultiNode/serial/StopMultiNode 24.06
227 TestMultiNode/serial/RestartMultiNode 79.59
228 TestMultiNode/serial/ValidateNameConflict 35.4
233 TestPreload 180.38
235 TestScheduledStopUnix 110.72
238 TestInsufficientStorage 13.6
241 TestKubernetesUpgrade 418.23
244 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
245 TestNoKubernetes/serial/StartWithK8s 42.17
246 TestNoKubernetes/serial/StartWithStopK8s 8.45
247 TestNoKubernetes/serial/Start 9.68
248 TestNoKubernetes/serial/VerifyK8sNotRunning 0.35
249 TestNoKubernetes/serial/ProfileList 1.19
250 TestNoKubernetes/serial/Stop 1.37
251 TestNoKubernetes/serial/StartNoArgs 7.77
252 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.32
253 TestStoppedBinaryUpgrade/Setup 1.37
255 TestStoppedBinaryUpgrade/MinikubeLogs 0.76
264 TestPause/serial/Start 83.67
273 TestNetworkPlugins/group/false 6.43
278 TestStartStop/group/old-k8s-version/serial/FirstStart 121.69
279 TestStartStop/group/old-k8s-version/serial/DeployApp 10.54
280 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.09
281 TestStartStop/group/old-k8s-version/serial/Stop 12.02
282 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.22
283 TestStartStop/group/old-k8s-version/serial/SecondStart 439.36
285 TestStartStop/group/no-preload/serial/FirstStart 67.84
286 TestStartStop/group/no-preload/serial/DeployApp 9.38
287 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.17
288 TestStartStop/group/no-preload/serial/Stop 12.06
289 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.22
290 TestStartStop/group/no-preload/serial/SecondStart 624.57
291 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
292 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.14
293 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.3
294 TestStartStop/group/old-k8s-version/serial/Pause 3.76
296 TestStartStop/group/embed-certs/serial/FirstStart 82.7
297 TestStartStop/group/embed-certs/serial/DeployApp 10.4
298 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.19
299 TestStartStop/group/embed-certs/serial/Stop 12.06
300 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.24
301 TestStartStop/group/embed-certs/serial/SecondStart 352.74
302 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
303 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.12
304 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.27
305 TestStartStop/group/no-preload/serial/Pause 3.52
307 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 77.66
308 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.4
309 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.24
310 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.05
311 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.24
312 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 605.3
313 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 10.01
314 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.12
315 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.26
316 TestStartStop/group/embed-certs/serial/Pause 3.52
318 TestStartStop/group/newest-cni/serial/FirstStart 47.74
319 TestStartStop/group/newest-cni/serial/DeployApp 0
320 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.01
321 TestStartStop/group/newest-cni/serial/Stop 1.31
322 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.24
323 TestStartStop/group/newest-cni/serial/SecondStart 30.18
324 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
325 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
326 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.28
327 TestStartStop/group/newest-cni/serial/Pause 3.36
328 TestNetworkPlugins/group/auto/Start 74.21
329 TestNetworkPlugins/group/auto/KubeletFlags 0.51
330 TestNetworkPlugins/group/auto/NetCatPod 10.35
331 TestNetworkPlugins/group/auto/DNS 0.2
332 TestNetworkPlugins/group/auto/Localhost 0.19
333 TestNetworkPlugins/group/auto/HairPin 0.19
334 TestNetworkPlugins/group/flannel/Start 62.93
335 TestNetworkPlugins/group/flannel/ControllerPod 6.01
336 TestNetworkPlugins/group/flannel/KubeletFlags 0.36
337 TestNetworkPlugins/group/flannel/NetCatPod 11.31
338 TestNetworkPlugins/group/flannel/DNS 0.25
339 TestNetworkPlugins/group/flannel/Localhost 0.23
340 TestNetworkPlugins/group/flannel/HairPin 0.23
341 TestNetworkPlugins/group/calico/Start 69.84
342 TestNetworkPlugins/group/calico/ControllerPod 6.01
343 TestNetworkPlugins/group/calico/KubeletFlags 0.37
344 TestNetworkPlugins/group/calico/NetCatPod 11.3
345 TestNetworkPlugins/group/calico/DNS 0.23
346 TestNetworkPlugins/group/calico/Localhost 0.19
347 TestNetworkPlugins/group/calico/HairPin 0.19
348 TestNetworkPlugins/group/custom-flannel/Start 66.39
349 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.33
350 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.31
351 TestNetworkPlugins/group/custom-flannel/DNS 0.27
352 TestNetworkPlugins/group/custom-flannel/Localhost 0.19
353 TestNetworkPlugins/group/custom-flannel/HairPin 0.17
354 TestNetworkPlugins/group/kindnet/Start 75.83
355 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
356 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 6.14
357 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.42
358 TestStartStop/group/default-k8s-diff-port/serial/Pause 4.52
359 TestNetworkPlugins/group/bridge/Start 85.34
360 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
361 TestNetworkPlugins/group/kindnet/KubeletFlags 0.33
362 TestNetworkPlugins/group/kindnet/NetCatPod 11.26
363 TestNetworkPlugins/group/kindnet/DNS 0.2
364 TestNetworkPlugins/group/kindnet/Localhost 0.18
365 TestNetworkPlugins/group/kindnet/HairPin 0.19
366 TestNetworkPlugins/group/enable-default-cni/Start 93.06
367 TestNetworkPlugins/group/bridge/KubeletFlags 0.45
368 TestNetworkPlugins/group/bridge/NetCatPod 12.38
369 TestNetworkPlugins/group/bridge/DNS 0.22
370 TestNetworkPlugins/group/bridge/Localhost 0.22
371 TestNetworkPlugins/group/bridge/HairPin 0.2
372 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.32
373 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.26
374 TestNetworkPlugins/group/enable-default-cni/DNS 0.2
375 TestNetworkPlugins/group/enable-default-cni/Localhost 0.17
376 TestNetworkPlugins/group/enable-default-cni/HairPin 0.16

TestDownloadOnly/v1.16.0/json-events (13.51s)

=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-684862 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-684862 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (13.512919712s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (13.51s)

TestDownloadOnly/v1.16.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

TestDownloadOnly/v1.16.0/LogsDuration (0.1s)

=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-684862
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-684862: exit status 85 (98.693689ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-684862 | jenkins | v1.32.0 | 03 Jan 24 19:52 UTC |          |
	|         | -p download-only-684862        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/03 19:52:13
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.21.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0103 19:52:13.479256  414769 out.go:296] Setting OutFile to fd 1 ...
	I0103 19:52:13.479502  414769 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0103 19:52:13.479527  414769 out.go:309] Setting ErrFile to fd 2...
	I0103 19:52:13.479546  414769 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0103 19:52:13.479835  414769 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17885-409390/.minikube/bin
	W0103 19:52:13.480024  414769 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17885-409390/.minikube/config/config.json: open /home/jenkins/minikube-integration/17885-409390/.minikube/config/config.json: no such file or directory
	I0103 19:52:13.480559  414769 out.go:303] Setting JSON to true
	I0103 19:52:13.481474  414769 start.go:128] hostinfo: {"hostname":"ip-172-31-29-130","uptime":5683,"bootTime":1704305851,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0103 19:52:13.481585  414769 start.go:138] virtualization:  
	I0103 19:52:13.484608  414769 out.go:97] [download-only-684862] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0103 19:52:13.487038  414769 out.go:169] MINIKUBE_LOCATION=17885
	W0103 19:52:13.484920  414769 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/17885-409390/.minikube/cache/preloaded-tarball: no such file or directory
	I0103 19:52:13.484971  414769 notify.go:220] Checking for updates...
	I0103 19:52:13.490764  414769 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0103 19:52:13.493019  414769 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17885-409390/kubeconfig
	I0103 19:52:13.495073  414769 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17885-409390/.minikube
	I0103 19:52:13.497023  414769 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0103 19:52:13.500424  414769 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0103 19:52:13.500724  414769 driver.go:392] Setting default libvirt URI to qemu:///system
	I0103 19:52:13.524892  414769 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I0103 19:52:13.524985  414769 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0103 19:52:13.612802  414769 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-01-03 19:52:13.602845918 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215044096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0103 19:52:13.612911  414769 docker.go:295] overlay module found
	I0103 19:52:13.615121  414769 out.go:97] Using the docker driver based on user configuration
	I0103 19:52:13.615153  414769 start.go:298] selected driver: docker
	I0103 19:52:13.615160  414769 start.go:902] validating driver "docker" against <nil>
	I0103 19:52:13.615276  414769 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0103 19:52:13.690062  414769 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-01-03 19:52:13.680305143 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215044096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0103 19:52:13.690221  414769 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0103 19:52:13.690562  414769 start_flags.go:394] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0103 19:52:13.690745  414769 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I0103 19:52:13.693040  414769 out.go:169] Using Docker driver with root privileges
	I0103 19:52:13.694921  414769 cni.go:84] Creating CNI manager for ""
	I0103 19:52:13.694945  414769 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0103 19:52:13.694964  414769 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I0103 19:52:13.694975  414769 start_flags.go:323] config:
	{Name:download-only-684862 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-684862 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0103 19:52:13.697000  414769 out.go:97] Starting control plane node download-only-684862 in cluster download-only-684862
	I0103 19:52:13.697019  414769 cache.go:121] Beginning downloading kic base image for docker with crio
	I0103 19:52:13.698829  414769 out.go:97] Pulling base image v0.0.42-1703498848-17857 ...
	I0103 19:52:13.698856  414769 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0103 19:52:13.699006  414769 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c in local docker daemon
	I0103 19:52:13.717477  414769 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c to local cache
	I0103 19:52:13.717665  414769 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c in local cache directory
	I0103 19:52:13.717775  414769 image.go:118] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c to local cache
	I0103 19:52:13.771967  414769 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4
	I0103 19:52:13.771993  414769 cache.go:56] Caching tarball of preloaded images
	I0103 19:52:13.772183  414769 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0103 19:52:13.774675  414769 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0103 19:52:13.774705  414769 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4 ...
	I0103 19:52:13.894195  414769 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:743cd3b7071469270e4dbdc0d89badaa -> /home/jenkins/minikube-integration/17885-409390/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4
	I0103 19:52:21.932504  414769 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c as a tarball
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-684862"

-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.10s)

TestDownloadOnly/v1.28.4/json-events (16.83s)

=== RUN   TestDownloadOnly/v1.28.4/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-684862 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-684862 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=crio --driver=docker  --container-runtime=crio: (16.830081437s)
--- PASS: TestDownloadOnly/v1.28.4/json-events (16.83s)

TestDownloadOnly/v1.28.4/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.4/preload-exists
--- PASS: TestDownloadOnly/v1.28.4/preload-exists (0.00s)

TestDownloadOnly/v1.28.4/LogsDuration (0.09s)

=== RUN   TestDownloadOnly/v1.28.4/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-684862
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-684862: exit status 85 (88.499781ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-684862 | jenkins | v1.32.0 | 03 Jan 24 19:52 UTC |          |
	|         | -p download-only-684862        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-684862 | jenkins | v1.32.0 | 03 Jan 24 19:52 UTC |          |
	|         | -p download-only-684862        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.4   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/03 19:52:27
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.21.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0103 19:52:27.091105  414847 out.go:296] Setting OutFile to fd 1 ...
	I0103 19:52:27.091339  414847 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0103 19:52:27.091354  414847 out.go:309] Setting ErrFile to fd 2...
	I0103 19:52:27.091361  414847 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0103 19:52:27.091728  414847 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17885-409390/.minikube/bin
	W0103 19:52:27.091891  414847 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17885-409390/.minikube/config/config.json: open /home/jenkins/minikube-integration/17885-409390/.minikube/config/config.json: no such file or directory
	I0103 19:52:27.092225  414847 out.go:303] Setting JSON to true
	I0103 19:52:27.093200  414847 start.go:128] hostinfo: {"hostname":"ip-172-31-29-130","uptime":5697,"bootTime":1704305851,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0103 19:52:27.093291  414847 start.go:138] virtualization:  
	I0103 19:52:27.095605  414847 out.go:97] [download-only-684862] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0103 19:52:27.095963  414847 notify.go:220] Checking for updates...
	I0103 19:52:27.098773  414847 out.go:169] MINIKUBE_LOCATION=17885
	I0103 19:52:27.101009  414847 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0103 19:52:27.102868  414847 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17885-409390/kubeconfig
	I0103 19:52:27.104705  414847 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17885-409390/.minikube
	I0103 19:52:27.106225  414847 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0103 19:52:27.109778  414847 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0103 19:52:27.110299  414847 config.go:182] Loaded profile config "download-only-684862": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	W0103 19:52:27.110350  414847 start.go:810] api.Load failed for download-only-684862: filestore "download-only-684862": Docker machine "download-only-684862" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0103 19:52:27.110458  414847 driver.go:392] Setting default libvirt URI to qemu:///system
	W0103 19:52:27.110490  414847 start.go:810] api.Load failed for download-only-684862: filestore "download-only-684862": Docker machine "download-only-684862" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0103 19:52:27.135289  414847 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I0103 19:52:27.135395  414847 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0103 19:52:27.233037  414847 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:40 SystemTime:2024-01-03 19:52:27.222237246 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215044096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0103 19:52:27.233150  414847 docker.go:295] overlay module found
	I0103 19:52:27.234931  414847 out.go:97] Using the docker driver based on existing profile
	I0103 19:52:27.234958  414847 start.go:298] selected driver: docker
	I0103 19:52:27.234965  414847 start.go:902] validating driver "docker" against &{Name:download-only-684862 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-684862 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0103 19:52:27.235152  414847 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0103 19:52:27.309436  414847 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:40 SystemTime:2024-01-03 19:52:27.300262404 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215044096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0103 19:52:27.309922  414847 cni.go:84] Creating CNI manager for ""
	I0103 19:52:27.309941  414847 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0103 19:52:27.309953  414847 start_flags.go:323] config:
	{Name:download-only-684862 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:download-only-684862 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0103 19:52:27.311748  414847 out.go:97] Starting control plane node download-only-684862 in cluster download-only-684862
	I0103 19:52:27.311770  414847 cache.go:121] Beginning downloading kic base image for docker with crio
	I0103 19:52:27.313323  414847 out.go:97] Pulling base image v0.0.42-1703498848-17857 ...
	I0103 19:52:27.313347  414847 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0103 19:52:27.313451  414847 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c in local docker daemon
	I0103 19:52:27.334794  414847 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c to local cache
	I0103 19:52:27.334950  414847 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c in local cache directory
	I0103 19:52:27.334975  414847 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c in local cache directory, skipping pull
	I0103 19:52:27.334985  414847 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c exists in cache, skipping pull
	I0103 19:52:27.334993  414847 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c as a tarball
	I0103 19:52:27.381726  414847 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4
	I0103 19:52:27.381756  414847 cache.go:56] Caching tarball of preloaded images
	I0103 19:52:27.381924  414847 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0103 19:52:27.383711  414847 out.go:97] Downloading Kubernetes v1.28.4 preload ...
	I0103 19:52:27.383734  414847 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4 ...
	I0103 19:52:27.497281  414847 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4?checksum=md5:23e2271fd1a7b32f52ce36ae8363c081 -> /home/jenkins/minikube-integration/17885-409390/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-684862"

-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.4/LogsDuration (0.09s)
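Note on the exit status 85 above: the captured stdout explains it — a --download-only profile never creates a control plane node, so "minikube logs" has nothing to collect; the test still passing suggests the harness expects this non-zero exit. A minimal Go sketch of recovering such an exit code through os/exec, assuming the binary path and profile name from this run:

    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    func main() {
        // Same invocation the harness runs at aaa_download_only_test.go:172.
        cmd := exec.Command("out/minikube-linux-arm64", "logs", "-p", "download-only-684862")
        out, err := cmd.CombinedOutput()

        // A non-zero exit surfaces as *exec.ExitError; ExitCode() is 85 in the run above.
        var exitErr *exec.ExitError
        if errors.As(err, &exitErr) {
            fmt.Printf("minikube logs exited with status %d\n", exitErr.ExitCode())
        }
        fmt.Println(string(out))
    }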

TestDownloadOnly/v1.29.0-rc.2/json-events (22.45s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-684862 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-684862 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=crio --driver=docker  --container-runtime=crio: (22.454143307s)
--- PASS: TestDownloadOnly/v1.29.0-rc.2/json-events (22.45s)

TestDownloadOnly/v1.29.0-rc.2/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/preload-exists
--- PASS: TestDownloadOnly/v1.29.0-rc.2/preload-exists (0.00s)

TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.09s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-684862
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-684862: exit status 85 (93.7921ms)

-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |               Args                |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only           | download-only-684862 | jenkins | v1.32.0 | 03 Jan 24 19:52 UTC |          |
	|         | -p download-only-684862           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0      |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	|         | --driver=docker                   |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	| start   | -o=json --download-only           | download-only-684862 | jenkins | v1.32.0 | 03 Jan 24 19:52 UTC |          |
	|         | -p download-only-684862           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.4      |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	|         | --driver=docker                   |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	| start   | -o=json --download-only           | download-only-684862 | jenkins | v1.32.0 | 03 Jan 24 19:52 UTC |          |
	|         | -p download-only-684862           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.29.0-rc.2 |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	|         | --driver=docker                   |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/03 19:52:44
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.21.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0103 19:52:44.011681  414922 out.go:296] Setting OutFile to fd 1 ...
	I0103 19:52:44.011960  414922 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0103 19:52:44.011986  414922 out.go:309] Setting ErrFile to fd 2...
	I0103 19:52:44.012009  414922 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0103 19:52:44.012351  414922 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17885-409390/.minikube/bin
	W0103 19:52:44.012553  414922 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17885-409390/.minikube/config/config.json: open /home/jenkins/minikube-integration/17885-409390/.minikube/config/config.json: no such file or directory
	I0103 19:52:44.012958  414922 out.go:303] Setting JSON to true
	I0103 19:52:44.013995  414922 start.go:128] hostinfo: {"hostname":"ip-172-31-29-130","uptime":5713,"bootTime":1704305851,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0103 19:52:44.014234  414922 start.go:138] virtualization:  
	I0103 19:52:44.017079  414922 out.go:97] [download-only-684862] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0103 19:52:44.017298  414922 notify.go:220] Checking for updates...
	I0103 19:52:44.019976  414922 out.go:169] MINIKUBE_LOCATION=17885
	I0103 19:52:44.022205  414922 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0103 19:52:44.024313  414922 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17885-409390/kubeconfig
	I0103 19:52:44.026908  414922 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17885-409390/.minikube
	I0103 19:52:44.029211  414922 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0103 19:52:44.034287  414922 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0103 19:52:44.034908  414922 config.go:182] Loaded profile config "download-only-684862": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	W0103 19:52:44.034983  414922 start.go:810] api.Load failed for download-only-684862: filestore "download-only-684862": Docker machine "download-only-684862" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0103 19:52:44.035104  414922 driver.go:392] Setting default libvirt URI to qemu:///system
	W0103 19:52:44.035137  414922 start.go:810] api.Load failed for download-only-684862: filestore "download-only-684862": Docker machine "download-only-684862" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0103 19:52:44.059223  414922 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I0103 19:52:44.059387  414922 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0103 19:52:44.143771  414922 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:40 SystemTime:2024-01-03 19:52:44.132832327 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215044096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0103 19:52:44.143887  414922 docker.go:295] overlay module found
	I0103 19:52:44.145896  414922 out.go:97] Using the docker driver based on existing profile
	I0103 19:52:44.145927  414922 start.go:298] selected driver: docker
	I0103 19:52:44.145934  414922 start.go:902] validating driver "docker" against &{Name:download-only-684862 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:download-only-684862 Namespace:default APIServerName:
minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:
SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0103 19:52:44.146127  414922 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0103 19:52:44.212845  414922 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:40 SystemTime:2024-01-03 19:52:44.203862746 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215044096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0103 19:52:44.213314  414922 cni.go:84] Creating CNI manager for ""
	I0103 19:52:44.213336  414922 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0103 19:52:44.213350  414922 start_flags.go:323] config:
	{Name:download-only-684862 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:download-only-684862 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contai
nerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0
s GPUs:}
	I0103 19:52:44.215147  414922 out.go:97] Starting control plane node download-only-684862 in cluster download-only-684862
	I0103 19:52:44.215168  414922 cache.go:121] Beginning downloading kic base image for docker with crio
	I0103 19:52:44.216713  414922 out.go:97] Pulling base image v0.0.42-1703498848-17857 ...
	I0103 19:52:44.216741  414922 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0103 19:52:44.216916  414922 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c in local docker daemon
	I0103 19:52:44.234085  414922 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c to local cache
	I0103 19:52:44.234286  414922 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c in local cache directory
	I0103 19:52:44.234311  414922 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c in local cache directory, skipping pull
	I0103 19:52:44.234320  414922 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c exists in cache, skipping pull
	I0103 19:52:44.234328  414922 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c as a tarball
	I0103 19:52:44.303031  414922 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-arm64.tar.lz4
	I0103 19:52:44.303069  414922 cache.go:56] Caching tarball of preloaded images
	I0103 19:52:44.303249  414922 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0103 19:52:44.305218  414922 out.go:97] Downloading Kubernetes v1.29.0-rc.2 preload ...
	I0103 19:52:44.305243  414922 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-arm64.tar.lz4 ...
	I0103 19:52:44.428659  414922 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-arm64.tar.lz4?checksum=md5:307124b87428587d9288b24ec2db2592 -> /home/jenkins/minikube-integration/17885-409390/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-arm64.tar.lz4
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-684862"

-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.09s)
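The preload download at download.go:107 above embeds its expected digest in the URL query (checksum=md5:307124b87428587d9288b24ec2db2592). A short sketch of verifying the cached tarball against that digest, with the path and checksum taken exactly as logged; this illustrates the check rather than reproducing minikube's own implementation (the harness's own verification is the preload.go:238 "getting checksum" step above):

    package main

    import (
        "crypto/md5"
        "encoding/hex"
        "fmt"
        "io"
        "log"
        "os"
    )

    func main() {
        // Path and digest as they appear in the download log line above.
        const path = "/home/jenkins/minikube-integration/17885-409390/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-arm64.tar.lz4"
        const want = "307124b87428587d9288b24ec2db2592"

        f, err := os.Open(path)
        if err != nil {
            log.Fatal(err)
        }
        defer f.Close()

        // Stream the file through the hash rather than reading it into memory.
        h := md5.New()
        if _, err := io.Copy(h, f); err != nil {
            log.Fatal(err)
        }
        if got := hex.EncodeToString(h.Sum(nil)); got != want {
            log.Fatalf("checksum mismatch: got %s, want %s", got, want)
        }
        fmt.Println("preload tarball checksum OK")
    }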

TestDownloadOnly/DeleteAll (0.24s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:190: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.24s)

TestDownloadOnly/DeleteAlwaysSucceeds (0.16s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:202: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-684862
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.16s)

TestBinaryMirror (0.63s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:307: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-868924 --alsologtostderr --binary-mirror http://127.0.0.1:45073 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-868924" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-868924
--- PASS: TestBinaryMirror (0.63s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.09s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-845596
addons_test.go:928: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-845596: exit status 85 (88.159244ms)

-- stdout --
	* Profile "addons-845596" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-845596"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.09s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-845596
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-845596: exit status 85 (81.445524ms)

-- stdout --
	* Profile "addons-845596" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-845596"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

TestAddons/Setup (173.81s)

=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-linux-arm64 start -p addons-845596 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns
addons_test.go:109: (dbg) Done: out/minikube-linux-arm64 start -p addons-845596 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns: (2m53.811228037s)
--- PASS: TestAddons/Setup (173.81s)

TestAddons/parallel/Registry (16.94s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry
=== CONT  TestAddons/parallel/Registry
addons_test.go:330: registry stabilized in 57.324671ms
addons_test.go:332: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-hw4tv" [0a9a5b31-9d9e-49dd-aa9d-06cb07d586af] Running
addons_test.go:332: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.005609689s
addons_test.go:335: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-hlftp" [8f750323-8b7d-46c9-b468-bf0deea921d1] Running
addons_test.go:335: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.005705411s
addons_test.go:340: (dbg) Run:  kubectl --context addons-845596 delete po -l run=registry-test --now
addons_test.go:345: (dbg) Run:  kubectl --context addons-845596 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:345: (dbg) Done: kubectl --context addons-845596 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.667953081s)
addons_test.go:359: (dbg) Run:  out/minikube-linux-arm64 -p addons-845596 ip
2024/01/03 19:56:18 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:388: (dbg) Run:  out/minikube-linux-arm64 -p addons-845596 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (16.94s)
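The registry check above has two halves: an in-cluster "wget --spider" against registry.kube-system.svc.cluster.local, and a host-side GET of http://192.168.49.2:5000 (the node IP from "minikube ip"). A minimal sketch of the host-side half in Go, assuming the node IP and port from this run:

    package main

    import (
        "fmt"
        "log"
        "net/http"
        "time"
    )

    func main() {
        // Host-side half of the check: the registry addon exposes the
        // registry on the node IP (192.168.49.2 in this run) at port 5000.
        client := &http.Client{Timeout: 5 * time.Second}
        resp, err := client.Get("http://192.168.49.2:5000")
        if err != nil {
            log.Fatal(err)
        }
        defer resp.Body.Close()
        fmt.Println("registry responded:", resp.Status)
    }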

TestAddons/parallel/InspektorGadget (12.21s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-6hdkj" [1d42e432-3ee5-4a3c-b612-b7988493ef0c] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.004278097s
addons_test.go:841: (dbg) Run:  out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-845596
addons_test.go:841: (dbg) Done: out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-845596: (6.201984091s)
--- PASS: TestAddons/parallel/InspektorGadget (12.21s)

TestAddons/parallel/MetricsServer (6.83s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:407: metrics-server stabilized in 10.030665ms
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7c66d45ddc-rhh5h" [9c2fc839-4c16-4364-aab4-4d2c62c7b4d5] Running
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.00507562s
addons_test.go:415: (dbg) Run:  kubectl --context addons-845596 top pods -n kube-system
addons_test.go:432: (dbg) Run:  out/minikube-linux-arm64 -p addons-845596 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.83s)

TestAddons/parallel/CSI (45.19s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
addons_test.go:561: csi-hostpath-driver pods stabilized in 9.77793ms
addons_test.go:564: (dbg) Run:  kubectl --context addons-845596 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-845596 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-845596 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-845596 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-845596 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-845596 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-845596 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-845596 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-845596 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:574: (dbg) Run:  kubectl --context addons-845596 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [b8c9d372-b54c-4d6f-b321-0e10d4d1acaf] Pending
helpers_test.go:344: "task-pv-pod" [b8c9d372-b54c-4d6f-b321-0e10d4d1acaf] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [b8c9d372-b54c-4d6f-b321-0e10d4d1acaf] Running
addons_test.go:579: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 13.004262484s
addons_test.go:584: (dbg) Run:  kubectl --context addons-845596 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:589: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-845596 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-845596 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:594: (dbg) Run:  kubectl --context addons-845596 delete pod task-pv-pod
addons_test.go:594: (dbg) Done: kubectl --context addons-845596 delete pod task-pv-pod: (1.129509225s)
addons_test.go:600: (dbg) Run:  kubectl --context addons-845596 delete pvc hpvc
addons_test.go:606: (dbg) Run:  kubectl --context addons-845596 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-845596 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-845596 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-845596 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-845596 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-845596 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-845596 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:616: (dbg) Run:  kubectl --context addons-845596 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:621: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [dd4ad49f-0c23-44a4-a469-5e3251f43cf9] Pending
helpers_test.go:344: "task-pv-pod-restore" [dd4ad49f-0c23-44a4-a469-5e3251f43cf9] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [dd4ad49f-0c23-44a4-a469-5e3251f43cf9] Running
addons_test.go:621: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.005287915s
addons_test.go:626: (dbg) Run:  kubectl --context addons-845596 delete pod task-pv-pod-restore
addons_test.go:630: (dbg) Run:  kubectl --context addons-845596 delete pvc hpvc-restore
addons_test.go:634: (dbg) Run:  kubectl --context addons-845596 delete volumesnapshot new-snapshot-demo
addons_test.go:638: (dbg) Run:  out/minikube-linux-arm64 -p addons-845596 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:638: (dbg) Done: out/minikube-linux-arm64 -p addons-845596 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.813410292s)
addons_test.go:642: (dbg) Run:  out/minikube-linux-arm64 -p addons-845596 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (45.19s)
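The repeated helpers_test.go:394 lines above are a poll loop: the helper re-reads the PVC's .status.phase until it reports Bound or the 6m0s budget runs out. A hedged sketch of the same loop driven through kubectl, with the context and PVC names taken from this run; the 2-second interval is an assumption, not the harness's actual timing:

    package main

    import (
        "fmt"
        "log"
        "os/exec"
        "strings"
        "time"
    )

    // waitForPVCBound mirrors the helper loop above: poll the PVC's
    // .status.phase with kubectl until it reports Bound or the deadline passes.
    func waitForPVCBound(context, name, namespace string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            out, err := exec.Command("kubectl", "--context", context,
                "get", "pvc", name, "-n", namespace,
                "-o", "jsonpath={.status.phase}").Output()
            if err == nil && strings.TrimSpace(string(out)) == "Bound" {
                return nil
            }
            time.Sleep(2 * time.Second)
        }
        return fmt.Errorf("pvc %s/%s not Bound within %s", namespace, name, timeout)
    }

    func main() {
        if err := waitForPVCBound("addons-845596", "hpvc", "default", 6*time.Minute); err != nil {
            log.Fatal(err)
        }
        fmt.Println("pvc is Bound")
    }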

TestAddons/parallel/Headlamp (12.7s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:824: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-845596 --alsologtostderr -v=1
addons_test.go:824: (dbg) Done: out/minikube-linux-arm64 addons enable headlamp -p addons-845596 --alsologtostderr -v=1: (1.688304177s)
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7ddfbb94ff-8bc2f" [f014b4b4-0fbe-4b57-bc90-d44d1412e2e8] Pending
helpers_test.go:344: "headlamp-7ddfbb94ff-8bc2f" [f014b4b4-0fbe-4b57-bc90-d44d1412e2e8] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7ddfbb94ff-8bc2f" [f014b4b4-0fbe-4b57-bc90-d44d1412e2e8] Running
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.004659712s
--- PASS: TestAddons/parallel/Headlamp (12.70s)

TestAddons/parallel/CloudSpanner (5.71s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-64c8c85f65-kc4z9" [b729946c-264f-4a3d-ab12-c754f4db7c32] Running
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003939956s
addons_test.go:860: (dbg) Run:  out/minikube-linux-arm64 addons disable cloud-spanner -p addons-845596
--- PASS: TestAddons/parallel/CloudSpanner (5.71s)

TestAddons/parallel/LocalPath (52.75s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:873: (dbg) Run:  kubectl --context addons-845596 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:879: (dbg) Run:  kubectl --context addons-845596 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:883: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-845596 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-845596 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-845596 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-845596 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-845596 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-845596 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [3786b64c-e939-4500-8fde-32482351dd1c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [3786b64c-e939-4500-8fde-32482351dd1c] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [3786b64c-e939-4500-8fde-32482351dd1c] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.003820984s
addons_test.go:891: (dbg) Run:  kubectl --context addons-845596 get pvc test-pvc -o=json
addons_test.go:900: (dbg) Run:  out/minikube-linux-arm64 -p addons-845596 ssh "cat /opt/local-path-provisioner/pvc-2ad071b6-0e4d-454a-9eea-120e6c6d57fe_default_test-pvc/file1"
addons_test.go:912: (dbg) Run:  kubectl --context addons-845596 delete pod test-local-path
addons_test.go:916: (dbg) Run:  kubectl --context addons-845596 delete pvc test-pvc
addons_test.go:920: (dbg) Run:  out/minikube-linux-arm64 -p addons-845596 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:920: (dbg) Done: out/minikube-linux-arm64 -p addons-845596 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.399049324s)
--- PASS: TestAddons/parallel/LocalPath (52.75s)

TestAddons/parallel/NvidiaDevicePlugin (6.65s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-jv75d" [d6c4dc1b-e6f6-4016-8a0c-c8156e31df4c] Running
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.006148266s
addons_test.go:955: (dbg) Run:  out/minikube-linux-arm64 addons disable nvidia-device-plugin -p addons-845596
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.65s)

TestAddons/parallel/Yakd (6s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-9947fc6bf-j4krk" [1919cd0c-387a-49ce-ba3d-12ed61bcc89b] Running
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003556417s
--- PASS: TestAddons/parallel/Yakd (6.00s)

TestAddons/serial/GCPAuth/Namespaces (0.2s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:650: (dbg) Run:  kubectl --context addons-845596 create ns new-namespace
addons_test.go:664: (dbg) Run:  kubectl --context addons-845596 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.20s)

TestAddons/StoppedEnableDisable (12.31s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-845596
addons_test.go:172: (dbg) Done: out/minikube-linux-arm64 stop -p addons-845596: (11.975053181s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-845596
addons_test.go:180: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-845596
addons_test.go:185: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-845596
--- PASS: TestAddons/StoppedEnableDisable (12.31s)

TestCertOptions (34.57s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-776585 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-776585 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (31.75757161s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-776585 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-776585 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-776585 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-776585" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-776585
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-776585: (2.067335666s)
--- PASS: TestCertOptions (34.57s)
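The openssl step above is what validates that the extra --apiserver-ips and --apiserver-names flags ended up as SANs in /var/lib/minikube/certs/apiserver.crt. The same inspection can be done with crypto/x509; this sketch assumes the certificate has already been copied to apiserver.crt on the host:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "log"
        "os"
    )

    func main() {
        // Read and parse the PEM-encoded apiserver certificate.
        data, err := os.ReadFile("apiserver.crt")
        if err != nil {
            log.Fatal(err)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            log.Fatal("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            log.Fatal(err)
        }
        // The flags in the run above should surface here as extra SANs.
        fmt.Println("DNS names:", cert.DNSNames)    // expect localhost, www.google.com, ...
        fmt.Println("IP SANs:  ", cert.IPAddresses) // expect 127.0.0.1, 192.168.15.15, ...
    }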

TestCertExpiration (246.17s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-652377 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-652377 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio: (41.014007319s)
E0103 20:39:11.513924  414763 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/functional-155561/client.crt: no such file or directory
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-652377 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio
E0103 20:42:04.568575  414763 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/ingress-addon-legacy-480050/client.crt: no such file or directory
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-652377 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (22.647364878s)
helpers_test.go:175: Cleaning up "cert-expiration-652377" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-652377
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-652377: (2.503281979s)
--- PASS: TestCertExpiration (246.17s)

TestForceSystemdFlag (41.64s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-518436 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-518436 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (38.093092503s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-518436 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-518436" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-518436
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-518436: (3.128964307s)
--- PASS: TestForceSystemdFlag (41.64s)
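The "cat /etc/crio/crio.conf.d/02-crio.conf" step above is how the test confirms that --force-systemd took effect inside the node. A sketch of the same check driven from Go; the expected cgroup_manager = "systemd" line is an assumption about CRI-O's drop-in contents, not something shown in this log:

    package main

    import (
        "fmt"
        "log"
        "os/exec"
        "strings"
    )

    func main() {
        // Same file the test reads over ssh; the profile name comes from the run above.
        out, err := exec.Command("out/minikube-linux-arm64", "-p", "force-systemd-flag-518436",
            "ssh", "cat /etc/crio/crio.conf.d/02-crio.conf").Output()
        if err != nil {
            log.Fatal(err)
        }
        // Assumption: with --force-systemd, CRI-O's drop-in sets cgroup_manager = "systemd".
        if strings.Contains(string(out), `cgroup_manager = "systemd"`) {
            fmt.Println("CRI-O is using the systemd cgroup manager")
        } else {
            fmt.Println("systemd cgroup manager not found in drop-in config")
        }
    }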

TestForceSystemdEnv (42.9s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-642559 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-642559 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (40.087813879s)
helpers_test.go:175: Cleaning up "force-systemd-env-642559" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-642559
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-642559: (2.810056471s)
--- PASS: TestForceSystemdEnv (42.90s)

TestErrorSpam/setup (29.16s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-776813 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-776813 --driver=docker  --container-runtime=crio
E0103 20:01:02.283084  414763 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/addons-845596/client.crt: no such file or directory
E0103 20:01:02.290552  414763 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/addons-845596/client.crt: no such file or directory
E0103 20:01:02.300830  414763 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/addons-845596/client.crt: no such file or directory
E0103 20:01:02.321092  414763 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/addons-845596/client.crt: no such file or directory
E0103 20:01:02.361357  414763 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/addons-845596/client.crt: no such file or directory
E0103 20:01:02.441670  414763 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/addons-845596/client.crt: no such file or directory
E0103 20:01:02.602021  414763 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/addons-845596/client.crt: no such file or directory
E0103 20:01:02.922590  414763 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/addons-845596/client.crt: no such file or directory
E0103 20:01:03.563578  414763 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/addons-845596/client.crt: no such file or directory
E0103 20:01:04.843801  414763 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/addons-845596/client.crt: no such file or directory
E0103 20:01:07.403979  414763 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/addons-845596/client.crt: no such file or directory
E0103 20:01:12.524287  414763 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/addons-845596/client.crt: no such file or directory
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-776813 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-776813 --driver=docker  --container-runtime=crio: (29.160639638s)
--- PASS: TestErrorSpam/setup (29.16s)

TestErrorSpam/start (0.88s)
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-776813 --log_dir /tmp/nospam-776813 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-776813 --log_dir /tmp/nospam-776813 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-776813 --log_dir /tmp/nospam-776813 start --dry-run
--- PASS: TestErrorSpam/start (0.88s)

TestErrorSpam/status (1.18s)
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-776813 --log_dir /tmp/nospam-776813 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-776813 --log_dir /tmp/nospam-776813 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-776813 --log_dir /tmp/nospam-776813 status
--- PASS: TestErrorSpam/status (1.18s)

TestErrorSpam/pause (1.96s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-776813 --log_dir /tmp/nospam-776813 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-776813 --log_dir /tmp/nospam-776813 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-776813 --log_dir /tmp/nospam-776813 pause
--- PASS: TestErrorSpam/pause (1.96s)

TestErrorSpam/unpause (2s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-776813 --log_dir /tmp/nospam-776813 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-776813 --log_dir /tmp/nospam-776813 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-776813 --log_dir /tmp/nospam-776813 unpause
--- PASS: TestErrorSpam/unpause (2.00s)

TestErrorSpam/stop (1.48s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-776813 --log_dir /tmp/nospam-776813 stop
E0103 20:01:22.765254  414763 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/addons-845596/client.crt: no such file or directory
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-776813 --log_dir /tmp/nospam-776813 stop: (1.231257103s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-776813 --log_dir /tmp/nospam-776813 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-776813 --log_dir /tmp/nospam-776813 stop
--- PASS: TestErrorSpam/stop (1.48s)

TestFunctional/serial/CopySyncFile (0s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1854: local sync path: /home/jenkins/minikube-integration/17885-409390/.minikube/files/etc/test/nested/copy/414763/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (72.8s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2233: (dbg) Run:  out/minikube-linux-arm64 start -p functional-155561 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
E0103 20:01:43.245516  414763 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/addons-845596/client.crt: no such file or directory
E0103 20:02:24.206085  414763 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/addons-845596/client.crt: no such file or directory
functional_test.go:2233: (dbg) Done: out/minikube-linux-arm64 start -p functional-155561 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (1m12.802649538s)
--- PASS: TestFunctional/serial/StartWithProxy (72.80s)

TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (34.9s)
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-arm64 start -p functional-155561 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-linux-arm64 start -p functional-155561 --alsologtostderr -v=8: (34.897801069s)
functional_test.go:659: soft start took 34.900233376s for "functional-155561" cluster.
--- PASS: TestFunctional/serial/SoftStart (34.90s)

TestFunctional/serial/KubeContext (0.07s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.07s)

TestFunctional/serial/KubectlGetPods (0.11s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-155561 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.11s)

TestFunctional/serial/CacheCmd/cache/add_remote (4.27s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-155561 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-155561 cache add registry.k8s.io/pause:3.1: (1.487225398s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-155561 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-155561 cache add registry.k8s.io/pause:3.3: (1.451327653s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-155561 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-155561 cache add registry.k8s.io/pause:latest: (1.328172263s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.27s)

TestFunctional/serial/CacheCmd/cache/add_local (1.12s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-155561 /tmp/TestFunctionalserialCacheCmdcacheadd_local2400497346/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-arm64 -p functional-155561 cache add minikube-local-cache-test:functional-155561
functional_test.go:1090: (dbg) Run:  out/minikube-linux-arm64 -p functional-155561 cache delete minikube-local-cache-test:functional-155561
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-155561
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.12s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.09s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.09s)

TestFunctional/serial/CacheCmd/cache/list (0.07s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.07s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.34s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-arm64 -p functional-155561 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.34s)

TestFunctional/serial/CacheCmd/cache/cache_reload (2.21s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-arm64 -p functional-155561 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-arm64 -p functional-155561 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-155561 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (358.476637ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-arm64 -p functional-155561 cache reload
functional_test.go:1154: (dbg) Done: out/minikube-linux-arm64 -p functional-155561 cache reload: (1.106090046s)
functional_test.go:1159: (dbg) Run:  out/minikube-linux-arm64 -p functional-155561 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.21s)
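Note: the cache_reload sequence above is a self-contained round trip: remove the image on the node, confirm crictl no longer sees it, restore it from minikube's local cache, confirm it is back. A minimal Go sketch of that flow (a hypothetical standalone program reusing only the profile, commands, and image name from the log):

package main

import (
	"fmt"
	"os/exec"
)

// run invokes the minikube binary used throughout this report and echoes its output.
func run(args ...string) error {
	out, err := exec.Command("out/minikube-linux-arm64", args...).CombinedOutput()
	fmt.Printf("$ minikube %v\n%s", args, out)
	return err
}

func main() {
	p := "functional-155561"
	_ = run("-p", p, "ssh", "sudo crictl rmi registry.k8s.io/pause:latest")
	// inspecti should now fail, mirroring the "no such image" error above.
	if run("-p", p, "ssh", "sudo crictl inspecti registry.k8s.io/pause:latest") == nil {
		fmt.Println("expected inspecti to fail after rmi")
	}
	_ = run("-p", p, "cache", "reload")
	// After the reload the image should be present again.
	if err := run("-p", p, "ssh", "sudo crictl inspecti registry.k8s.io/pause:latest"); err != nil {
		fmt.Println("image still missing after cache reload:", err)
	}
}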

TestFunctional/serial/CacheCmd/cache/delete (0.15s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.15s)

TestFunctional/serial/MinikubeKubectlCmd (0.16s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-arm64 -p functional-155561 kubectl -- --context functional-155561 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.16s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.18s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-155561 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.18s)

TestFunctional/serial/ExtraConfig (35.27s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-arm64 start -p functional-155561 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0103 20:03:46.126308  414763 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/addons-845596/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-linux-arm64 start -p functional-155561 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (35.270213267s)
functional_test.go:757: restart took 35.27031701s for "functional-155561" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (35.27s)

TestFunctional/serial/ComponentHealth (0.13s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-155561 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.13s)
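Note: the phase/status lines above come from parsing the control-plane pods' JSON. A minimal Go sketch of the same health check (a hypothetical standalone program; it assumes the standard component label that kube-system static pods carry):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// podList models only the slice of the PodList schema this check needs.
type podList struct {
	Items []struct {
		Metadata struct {
			Labels map[string]string `json:"labels"`
		} `json:"metadata"`
		Status struct {
			Phase      string `json:"phase"`
			Conditions []struct {
				Type   string `json:"type"`
				Status string `json:"status"`
			} `json:"conditions"`
		} `json:"status"`
	} `json:"items"`
}

func main() {
	out, err := exec.Command("kubectl", "--context", "functional-155561", "get", "po",
		"-l", "tier=control-plane", "-n", "kube-system", "-o=json").Output()
	if err != nil {
		panic(err)
	}
	var pods podList
	if err := json.Unmarshal(out, &pods); err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		status := "NotReady"
		for _, c := range p.Status.Conditions {
			if c.Type == "Ready" && c.Status == "True" {
				status = "Ready"
			}
		}
		fmt.Printf("%s phase: %s, status: %s\n",
			p.Metadata.Labels["component"], p.Status.Phase, status)
	}
}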

TestFunctional/serial/LogsCmd (1.89s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-arm64 -p functional-155561 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-arm64 -p functional-155561 logs: (1.891057844s)
--- PASS: TestFunctional/serial/LogsCmd (1.89s)

TestFunctional/serial/LogsFileCmd (1.93s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-arm64 -p functional-155561 logs --file /tmp/TestFunctionalserialLogsFileCmd3418854866/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-arm64 -p functional-155561 logs --file /tmp/TestFunctionalserialLogsFileCmd3418854866/001/logs.txt: (1.92985026s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.93s)

TestFunctional/serial/InvalidService (4.5s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2320: (dbg) Run:  kubectl --context functional-155561 apply -f testdata/invalidsvc.yaml
functional_test.go:2334: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-155561
functional_test.go:2334: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-155561: exit status 115 (503.268689ms)

-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:31754 |
	|-----------|-------------|-------------|---------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2326: (dbg) Run:  kubectl --context functional-155561 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.50s)
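Note: exit status 115 is the SVC_UNREACHABLE failure shown in the stderr above, returned because the service's selector matches no running pod even though a NodePort was allocated. A minimal Go sketch that asserts on that exit code (a hypothetical standalone program):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// `minikube service` prints the NodePort table but fails when no
	// backing pod is running, as in the log above.
	err := exec.Command("out/minikube-linux-arm64", "service", "invalid-svc",
		"-p", "functional-155561").Run()
	var ee *exec.ExitError
	if errors.As(err, &ee) && ee.ExitCode() == 115 {
		fmt.Println("got the expected SVC_UNREACHABLE exit (115)")
	} else {
		fmt.Println("unexpected result:", err)
	}
}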

TestFunctional/parallel/ConfigCmd (0.61s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-155561 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-155561 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-155561 config get cpus: exit status 14 (102.897261ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-155561 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-155561 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-155561 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-155561 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-155561 config get cpus: exit status 14 (90.531285ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.61s)
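Note: `config get` on an unset key fails with exit status 14, which is what the unset/get/set/get/unset/get cycle above exercises. A minimal Go sketch of the same cycle (a hypothetical standalone program):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	bin, p := "out/minikube-linux-arm64", "functional-155561"
	_ = exec.Command(bin, "-p", p, "config", "unset", "cpus").Run()
	// With the key unset, `config get` should fail (exit status 14 above).
	err := exec.Command(bin, "-p", p, "config", "get", "cpus").Run()
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		fmt.Println("get on unset key exited with code", ee.ExitCode())
	}
	// After `config set cpus 2`, the same get succeeds.
	_ = exec.Command(bin, "-p", p, "config", "set", "cpus", "2").Run()
	if err := exec.Command(bin, "-p", p, "config", "get", "cpus").Run(); err != nil {
		fmt.Println("get after set unexpectedly failed:", err)
	}
}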

TestFunctional/parallel/DashboardCmd (10.58s)
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-155561 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-155561 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 439422: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (10.58s)

TestFunctional/parallel/DryRun (0.66s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-arm64 start -p functional-155561 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-155561 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (334.446122ms)

-- stdout --
	* [functional-155561] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17885
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17885-409390/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17885-409390/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0103 20:04:45.158927  439025 out.go:296] Setting OutFile to fd 1 ...
	I0103 20:04:45.159147  439025 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0103 20:04:45.159157  439025 out.go:309] Setting ErrFile to fd 2...
	I0103 20:04:45.159164  439025 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0103 20:04:45.159475  439025 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17885-409390/.minikube/bin
	I0103 20:04:45.159935  439025 out.go:303] Setting JSON to false
	I0103 20:04:45.161025  439025 start.go:128] hostinfo: {"hostname":"ip-172-31-29-130","uptime":6435,"bootTime":1704305851,"procs":202,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0103 20:04:45.161114  439025 start.go:138] virtualization:  
	I0103 20:04:45.171186  439025 out.go:177] * [functional-155561] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0103 20:04:45.173686  439025 out.go:177]   - MINIKUBE_LOCATION=17885
	I0103 20:04:45.176019  439025 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0103 20:04:45.174875  439025 notify.go:220] Checking for updates...
	I0103 20:04:45.180770  439025 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17885-409390/kubeconfig
	I0103 20:04:45.182829  439025 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17885-409390/.minikube
	I0103 20:04:45.185144  439025 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0103 20:04:45.187364  439025 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0103 20:04:45.190554  439025 config.go:182] Loaded profile config "functional-155561": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0103 20:04:45.191814  439025 driver.go:392] Setting default libvirt URI to qemu:///system
	I0103 20:04:45.247752  439025 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I0103 20:04:45.247927  439025 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0103 20:04:45.371734  439025 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:46 SystemTime:2024-01-03 20:04:45.360531859 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215044096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0103 20:04:45.371852  439025 docker.go:295] overlay module found
	I0103 20:04:45.376322  439025 out.go:177] * Using the docker driver based on existing profile
	I0103 20:04:45.378682  439025 start.go:298] selected driver: docker
	I0103 20:04:45.378715  439025 start.go:902] validating driver "docker" against &{Name:functional-155561 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-155561 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0103 20:04:45.378833  439025 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0103 20:04:45.381385  439025 out.go:177] 
	W0103 20:04:45.383487  439025 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0103 20:04:45.385366  439025 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-arm64 start -p functional-155561 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.66s)
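Note: a dry run validates flags against the existing profile without creating anything; the 250MB request is rejected during validation with exit status 23 (RSRC_INSUFFICIENT_REQ_MEMORY above), while the plain dry run succeeds. A minimal Go sketch of both halves (a hypothetical standalone program):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	bin, p := "out/minikube-linux-arm64", "functional-155561"
	// Under-provisioned memory should fail validation before anything is created.
	err := exec.Command(bin, "start", "-p", p, "--dry-run", "--memory", "250MB",
		"--driver=docker", "--container-runtime=crio").Run()
	fmt.Println("250MB dry-run:", err) // expect a non-nil *exec.ExitError
	// Without the bad memory flag, the same dry run should validate cleanly.
	err = exec.Command(bin, "start", "-p", p, "--dry-run",
		"--driver=docker", "--container-runtime=crio").Run()
	fmt.Println("plain dry-run:", err) // expect nil
}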

TestFunctional/parallel/InternationalLanguage (0.27s)
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-arm64 start -p functional-155561 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-155561 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (264.80408ms)

-- stdout --
	* [functional-155561] minikube v1.32.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17885
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17885-409390/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17885-409390/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0103 20:04:44.860466  438986 out.go:296] Setting OutFile to fd 1 ...
	I0103 20:04:44.860668  438986 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0103 20:04:44.860674  438986 out.go:309] Setting ErrFile to fd 2...
	I0103 20:04:44.860679  438986 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0103 20:04:44.862229  438986 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17885-409390/.minikube/bin
	I0103 20:04:44.862737  438986 out.go:303] Setting JSON to false
	I0103 20:04:44.863814  438986 start.go:128] hostinfo: {"hostname":"ip-172-31-29-130","uptime":6434,"bootTime":1704305851,"procs":202,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0103 20:04:44.863898  438986 start.go:138] virtualization:  
	I0103 20:04:44.868017  438986 out.go:177] * [functional-155561] minikube v1.32.0 sur Ubuntu 20.04 (arm64)
	I0103 20:04:44.870035  438986 out.go:177]   - MINIKUBE_LOCATION=17885
	I0103 20:04:44.870097  438986 notify.go:220] Checking for updates...
	I0103 20:04:44.874570  438986 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0103 20:04:44.876522  438986 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17885-409390/kubeconfig
	I0103 20:04:44.878642  438986 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17885-409390/.minikube
	I0103 20:04:44.880731  438986 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0103 20:04:44.882665  438986 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0103 20:04:44.884977  438986 config.go:182] Loaded profile config "functional-155561": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0103 20:04:44.885653  438986 driver.go:392] Setting default libvirt URI to qemu:///system
	I0103 20:04:44.911991  438986 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I0103 20:04:44.912116  438986 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0103 20:04:45.001275  438986 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:46 SystemTime:2024-01-03 20:04:44.991444838 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215044096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0103 20:04:45.001376  438986 docker.go:295] overlay module found
	I0103 20:04:45.003903  438986 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0103 20:04:45.005779  438986 start.go:298] selected driver: docker
	I0103 20:04:45.005797  438986 start.go:902] validating driver "docker" against &{Name:functional-155561 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-155561 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0103 20:04:45.005897  438986 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0103 20:04:45.021499  438986 out.go:177] 
	W0103 20:04:45.028987  438986 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0103 20:04:45.034011  438986 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.27s)

TestFunctional/parallel/StatusCmd (1.34s)
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-arm64 -p functional-155561 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-arm64 -p functional-155561 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-arm64 -p functional-155561 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.34s)

TestFunctional/parallel/ServiceCmdConnect (10.78s)
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1626: (dbg) Run:  kubectl --context functional-155561 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-155561 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-7799dfb7c6-gxhmv" [ac7b95ef-8f4c-43bf-b530-15996a65472e] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-7799dfb7c6-gxhmv" [ac7b95ef-8f4c-43bf-b530-15996a65472e] Running
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 10.004294993s
functional_test.go:1648: (dbg) Run:  out/minikube-linux-arm64 -p functional-155561 service hello-node-connect --url
functional_test.go:1654: found endpoint for hello-node-connect: http://192.168.49.2:32342
functional_test.go:1674: http://192.168.49.2:32342: success! body:

Hostname: hello-node-connect-7799dfb7c6-gxhmv

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:32342
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (10.78s)
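Note: the test resolves the service's NodePort URL via `minikube service --url` and then fetches it; the echoserver body above is what comes back. A minimal Go sketch of that probe (a hypothetical standalone program):

package main

import (
	"fmt"
	"io"
	"net/http"
	"os/exec"
	"strings"
)

func main() {
	// Ask minikube for the service's NodePort URL, as the test does.
	out, err := exec.Command("out/minikube-linux-arm64", "-p", "functional-155561",
		"service", "hello-node-connect", "--url").Output()
	if err != nil {
		panic(err)
	}
	url := strings.TrimSpace(string(out))
	resp, err := http.Get(url)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%s -> %s\n%s", url, resp.Status, body)
}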

TestFunctional/parallel/AddonsCmd (0.28s)
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-linux-arm64 -p functional-155561 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-linux-arm64 -p functional-155561 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.28s)

TestFunctional/parallel/PersistentVolumeClaim (25.95s)
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [4847ef39-e510-48bb-81aa-512b3e41fcee] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.00414405s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-155561 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-155561 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-155561 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-155561 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [0c290983-5d60-4c95-8307-94a123bbf5a4] Pending
helpers_test.go:344: "sp-pod" [0c290983-5d60-4c95-8307-94a123bbf5a4] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [0c290983-5d60-4c95-8307-94a123bbf5a4] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.003936164s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-155561 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-155561 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-155561 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [d6ecbe9e-0465-4f0b-b5a2-2dc34285edb7] Pending
helpers_test.go:344: "sp-pod" [d6ecbe9e-0465-4f0b-b5a2-2dc34285edb7] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.003878457s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-155561 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (25.95s)
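Note: the PVC check is a persistence round trip: write a file through the mounted claim, delete and recreate the pod, and confirm the file is still there. A minimal Go sketch (a hypothetical standalone program; the pod-readiness polling the real test does with a label selector is elided):

package main

import (
	"fmt"
	"os/exec"
)

// kc runs kubectl against the profile's context used throughout this report.
func kc(args ...string) ([]byte, error) {
	args = append([]string{"--context", "functional-155561"}, args...)
	return exec.Command("kubectl", args...).CombinedOutput()
}

func main() {
	if out, err := kc("exec", "sp-pod", "--", "touch", "/tmp/mount/foo"); err != nil {
		panic(fmt.Sprintf("touch failed: %v: %s", err, out))
	}
	_, _ = kc("delete", "-f", "testdata/storage-provisioner/pod.yaml")
	_, _ = kc("apply", "-f", "testdata/storage-provisioner/pod.yaml")
	// (wait for the new sp-pod to be Running before this exec)
	out, err := kc("exec", "sp-pod", "--", "ls", "/tmp/mount")
	fmt.Printf("err=%v, surviving files:\n%s", err, out)
}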

TestFunctional/parallel/SSHCmd (0.83s)
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-linux-arm64 -p functional-155561 ssh "echo hello"
functional_test.go:1741: (dbg) Run:  out/minikube-linux-arm64 -p functional-155561 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.83s)

TestFunctional/parallel/CpCmd (2.67s)
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-155561 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-155561 ssh -n functional-155561 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-155561 cp functional-155561:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3097974640/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-155561 ssh -n functional-155561 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-155561 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-155561 ssh -n functional-155561 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.67s)
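Note: each cp above is verified by reading the file back over ssh. A minimal Go sketch of one copy/verify round trip (a hypothetical standalone program, using the same paths and flags as the log):

package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

func main() {
	bin, p := "out/minikube-linux-arm64", "functional-155561"
	want, err := os.ReadFile("testdata/cp-test.txt")
	if err != nil {
		panic(err)
	}
	// Copy into the node, then read it back via ssh on the same node.
	if err := exec.Command(bin, "-p", p, "cp", "testdata/cp-test.txt",
		"/home/docker/cp-test.txt").Run(); err != nil {
		panic(err)
	}
	got, err := exec.Command(bin, "-p", p, "ssh", "-n", p,
		"sudo cat /home/docker/cp-test.txt").Output()
	if err != nil {
		panic(err)
	}
	fmt.Println("round trip matches:", bytes.Equal(got, want))
}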

TestFunctional/parallel/FileSync (0.48s)
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1928: Checking for existence of /etc/test/nested/copy/414763/hosts within VM
functional_test.go:1930: (dbg) Run:  out/minikube-linux-arm64 -p functional-155561 ssh "sudo cat /etc/test/nested/copy/414763/hosts"
functional_test.go:1935: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.48s)

TestFunctional/parallel/CertSync (2.46s)
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1971: Checking for existence of /etc/ssl/certs/414763.pem within VM
functional_test.go:1972: (dbg) Run:  out/minikube-linux-arm64 -p functional-155561 ssh "sudo cat /etc/ssl/certs/414763.pem"
functional_test.go:1971: Checking for existence of /usr/share/ca-certificates/414763.pem within VM
functional_test.go:1972: (dbg) Run:  out/minikube-linux-arm64 -p functional-155561 ssh "sudo cat /usr/share/ca-certificates/414763.pem"
functional_test.go:1971: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1972: (dbg) Run:  out/minikube-linux-arm64 -p functional-155561 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1998: Checking for existence of /etc/ssl/certs/4147632.pem within VM
functional_test.go:1999: (dbg) Run:  out/minikube-linux-arm64 -p functional-155561 ssh "sudo cat /etc/ssl/certs/4147632.pem"
functional_test.go:1998: Checking for existence of /usr/share/ca-certificates/4147632.pem within VM
functional_test.go:1999: (dbg) Run:  out/minikube-linux-arm64 -p functional-155561 ssh "sudo cat /usr/share/ca-certificates/4147632.pem"
functional_test.go:1998: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1999: (dbg) Run:  out/minikube-linux-arm64 -p functional-155561 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.46s)

TestFunctional/parallel/NodeLabels (0.17s)
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-155561 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.17s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.88s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2026: (dbg) Run:  out/minikube-linux-arm64 -p functional-155561 ssh "sudo systemctl is-active docker"
functional_test.go:2026: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-155561 ssh "sudo systemctl is-active docker": exit status 1 (435.001902ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2026: (dbg) Run:  out/minikube-linux-arm64 -p functional-155561 ssh "sudo systemctl is-active containerd"
functional_test.go:2026: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-155561 ssh "sudo systemctl is-active containerd": exit status 1 (449.068514ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.88s)
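
The two non-zero exits above are the expected outcome: systemctl is-active prints "inactive" and exits with status 3 for a unit that is not running, and minikube ssh propagates that exit code, so on a CRI-O node both the docker and containerd units should fail this probe. A quick manual contrast (sketch):

  out/minikube-linux-arm64 -p functional-155561 ssh "sudo systemctl is-active crio"    # "active", exit 0
  out/minikube-linux-arm64 -p functional-155561 ssh "sudo systemctl is-active docker"  # "inactive", exit 3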

TestFunctional/parallel/License (0.34s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2287: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.34s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.78s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-155561 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-155561 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-155561 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 437098: os: process already finished
helpers_test.go:502: unable to terminate pid 436927: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-155561 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.78s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-155561 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.56s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-155561 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [c00ebad8-ed6f-422e-9d26-b80bd2e3a819] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [c00ebad8-ed6f-422e-9d26-b80bd2e3a819] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 10.00426657s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.56s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.14s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-155561 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.14s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.108.75.87 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
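
10.108.75.87 is a cluster-internal service IP; it is only reachable from the host because the tunnel started in StartTunnel routes the service network onto the host. A manual equivalent, assuming the tunnel is still running:

  kubectl --context functional-155561 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
  curl -s http://10.108.75.87/   # should return the nginx welcome page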

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.21s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-155561 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 437431: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.21s)

TestFunctional/parallel/ServiceCmd/DeployApp (7.23s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1436: (dbg) Run:  kubectl --context functional-155561 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-155561 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-759d89bdcc-t49jx" [8b7b3a76-9344-49b5-9956-43ece05f3bdd] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-759d89bdcc-t49jx" [8b7b3a76-9344-49b5-9956-43ece05f3bdd] Running
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 7.00351057s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (7.23s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.52s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.52s)
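
The misspelled "profile lis" is presumably deliberate: the subtest checks that an invalid profile subcommand does not implicitly create a profile, then lists profiles as JSON to confirm. A manual sketch of the same check:

  out/minikube-linux-arm64 profile lis
  out/minikube-linux-arm64 profile list --output json   # should show no profile named "lis"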

TestFunctional/parallel/ProfileCmd/profile_list (0.44s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1314: Took "364.847391ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1328: Took "70.70297ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.44s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.44s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1365: Took "363.371239ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1378: Took "72.789274ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.44s)
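
The timings above illustrate the point of the --light flag: the full listing interrogates cluster status (~363ms here) while the light variant skips those checks (~73ms). A rough manual comparison:

  time out/minikube-linux-arm64 profile list -o json >/dev/null
  time out/minikube-linux-arm64 profile list -o json --light >/dev/null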

TestFunctional/parallel/MountCmd/any-port (8.59s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-155561 /tmp/TestFunctionalparallelMountCmdany-port88281998/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1704312279303645071" to /tmp/TestFunctionalparallelMountCmdany-port88281998/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1704312279303645071" to /tmp/TestFunctionalparallelMountCmdany-port88281998/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1704312279303645071" to /tmp/TestFunctionalparallelMountCmdany-port88281998/001/test-1704312279303645071
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-155561 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-155561 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (409.687789ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-155561 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-155561 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jan  3 20:04 created-by-test
-rw-r--r-- 1 docker docker 24 Jan  3 20:04 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jan  3 20:04 test-1704312279303645071
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-155561 ssh cat /mount-9p/test-1704312279303645071
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-155561 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [d76e9b8d-8211-4905-b824-680e67b5f4e2] Pending
helpers_test.go:344: "busybox-mount" [d76e9b8d-8211-4905-b824-680e67b5f4e2] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [d76e9b8d-8211-4905-b824-680e67b5f4e2] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [d76e9b8d-8211-4905-b824-680e67b5f4e2] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.004558582s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-155561 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-155561 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-155561 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-155561 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-155561 /tmp/TestFunctionalparallelMountCmdany-port88281998/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.59s)
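
The first findmnt probe fails with exit 1 because the 9p mount is established asynchronously after the minikube mount daemon starts; the test simply retries and the second probe succeeds. A manual sketch, assuming a hypothetical host directory /tmp/hostdir:

  out/minikube-linux-arm64 mount -p functional-155561 /tmp/hostdir:/mount-9p &
  out/minikube-linux-arm64 -p functional-155561 ssh "findmnt -T /mount-9p | grep 9p"   # may need a retry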

TestFunctional/parallel/ServiceCmd/List (0.68s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-linux-arm64 -p functional-155561 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.68s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.59s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-linux-arm64 -p functional-155561 service list -o json
functional_test.go:1493: Took "593.843022ms" to run "out/minikube-linux-arm64 -p functional-155561 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.59s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.55s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-linux-arm64 -p functional-155561 service --namespace=default --https --url hello-node
functional_test.go:1521: found endpoint: https://192.168.49.2:31547
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.55s)

TestFunctional/parallel/ServiceCmd/Format (0.44s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-linux-arm64 -p functional-155561 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.44s)

TestFunctional/parallel/ServiceCmd/URL (0.47s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-linux-arm64 -p functional-155561 service hello-node --url
functional_test.go:1564: found endpoint for hello-node: http://192.168.49.2:31547
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.47s)
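
Both the HTTPS and URL subtests resolve to 192.168.49.2:31547, i.e. the node IP plus the NodePort allocated when DeployApp exposed the deployment. A manual equivalent:

  kubectl --context functional-155561 get svc hello-node -o jsonpath='{.spec.ports[0].nodePort}'   # 31547 in this run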

TestFunctional/parallel/MountCmd/specific-port (1.54s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-155561 /tmp/TestFunctionalparallelMountCmdspecific-port4263585307/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-155561 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-155561 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-155561 /tmp/TestFunctionalparallelMountCmdspecific-port4263585307/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-155561 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-155561 ssh "sudo umount -f /mount-9p": exit status 1 (423.308743ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-155561 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-155561 /tmp/TestFunctionalparallelMountCmdspecific-port4263585307/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.54s)
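
The failing cleanup unmount is benign: the mount daemon had already been stopped at functional_test_mount_test.go:263, so by the time the deferred unmount runs nothing is mounted and umount reports "not mounted" with exit status 32, which the test tolerates. Reproducing the same answer by hand (sketch):

  out/minikube-linux-arm64 -p functional-155561 ssh "sudo umount -f /mount-9p"   # exit 32 when nothing is mounted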

TestFunctional/parallel/MountCmd/VerifyCleanup (3.45s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-155561 /tmp/TestFunctionalparallelMountCmdVerifyCleanup769068437/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-155561 /tmp/TestFunctionalparallelMountCmdVerifyCleanup769068437/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-155561 /tmp/TestFunctionalparallelMountCmdVerifyCleanup769068437/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-155561 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-155561 ssh "findmnt -T" /mount1: exit status 1 (1.50080426s)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-155561 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-155561 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-155561 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-155561 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-155561 /tmp/TestFunctionalparallelMountCmdVerifyCleanup769068437/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-155561 /tmp/TestFunctionalparallelMountCmdVerifyCleanup769068437/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-155561 /tmp/TestFunctionalparallelMountCmdVerifyCleanup769068437/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (3.45s)
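
minikube mount --kill=true terminates every outstanding mount process for the profile in one step, which is why the three per-mount stop phases afterwards find their parents already gone ("unable to find parent, assuming dead"). The cleanup amounts to:

  out/minikube-linux-arm64 mount -p functional-155561 --kill=true   # kills all mount daemons for the profile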

TestFunctional/parallel/Version/short (0.11s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2255: (dbg) Run:  out/minikube-linux-arm64 -p functional-155561 version --short
--- PASS: TestFunctional/parallel/Version/short (0.11s)

TestFunctional/parallel/Version/components (1.56s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2269: (dbg) Run:  out/minikube-linux-arm64 -p functional-155561 version -o=json --components
functional_test.go:2269: (dbg) Done: out/minikube-linux-arm64 -p functional-155561 version -o=json --components: (1.559657987s)
--- PASS: TestFunctional/parallel/Version/components (1.56s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.36s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-155561 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-155561 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.4
registry.k8s.io/kube-proxy:v1.28.4
registry.k8s.io/kube-controller-manager:v1.28.4
registry.k8s.io/kube-apiserver:v1.28.4
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-155561
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/kindest/kindnetd:v20230809-80a64d96
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-155561 image ls --format short --alsologtostderr:
I0103 20:05:15.636892  441483 out.go:296] Setting OutFile to fd 1 ...
I0103 20:05:15.637087  441483 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0103 20:05:15.637097  441483 out.go:309] Setting ErrFile to fd 2...
I0103 20:05:15.637103  441483 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0103 20:05:15.637366  441483 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17885-409390/.minikube/bin
I0103 20:05:15.638007  441483 config.go:182] Loaded profile config "functional-155561": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0103 20:05:15.638168  441483 config.go:182] Loaded profile config "functional-155561": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0103 20:05:15.638727  441483 cli_runner.go:164] Run: docker container inspect functional-155561 --format={{.State.Status}}
I0103 20:05:15.665817  441483 ssh_runner.go:195] Run: systemctl --version
I0103 20:05:15.665873  441483 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-155561
I0103 20:05:15.689591  441483 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/17885-409390/.minikube/machines/functional-155561/id_rsa Username:docker}
I0103 20:05:15.803188  441483 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.36s)
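
The stderr trace shows how image ls works on a CRI-O node: minikube opens an ssh session into the container, lists images via crictl, and formats the JSON. The raw data can be inspected directly:

  out/minikube-linux-arm64 -p functional-155561 ssh "sudo crictl images --output json"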

TestFunctional/parallel/ImageCommands/ImageListTable (0.31s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-155561 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-155561 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 1611cd07b61d5 | 3.77MB |
| registry.k8s.io/etcd                    | 3.5.9-0            | 9cdd6470f48c8 | 182MB  |
| registry.k8s.io/pause                   | 3.3                | 3d18732f8686c | 487kB  |
| gcr.io/google-containers/addon-resizer  | functional-155561  | ffd4cfbbe753e | 34.1MB |
| registry.k8s.io/coredns/coredns         | v1.10.1            | 97e04611ad434 | 51.4MB |
| registry.k8s.io/pause                   | latest             | 8cb2091f603e7 | 246kB  |
| docker.io/library/nginx                 | alpine             | 74077e780ec71 | 45.3MB |
| registry.k8s.io/kube-apiserver          | v1.28.4            | 04b4c447bb9d4 | 121MB  |
| registry.k8s.io/kube-proxy              | v1.28.4            | 3ca3ca488cf13 | 70MB   |
| registry.k8s.io/kube-scheduler          | v1.28.4            | 05c284c929889 | 59.3MB |
| registry.k8s.io/pause                   | 3.1                | 8057e0500773a | 529kB  |
| registry.k8s.io/pause                   | 3.9                | 829e9de338bd5 | 520kB  |
| docker.io/kindest/kindnetd              | v20230809-80a64d96 | 04b4eaa3d3db8 | 60.9MB |
| docker.io/library/nginx                 | latest             | 8aea65d81da20 | 196MB  |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | ba04bb24b9575 | 29MB   |
| registry.k8s.io/echoserver-arm          | 1.8                | 72565bf5bbedf | 87.5MB |
| registry.k8s.io/kube-controller-manager | v1.28.4            | 9961cbceaf234 | 117MB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-155561 image ls --format table --alsologtostderr:
I0103 20:05:16.330194  441617 out.go:296] Setting OutFile to fd 1 ...
I0103 20:05:16.330421  441617 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0103 20:05:16.330436  441617 out.go:309] Setting ErrFile to fd 2...
I0103 20:05:16.330443  441617 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0103 20:05:16.330785  441617 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17885-409390/.minikube/bin
I0103 20:05:16.331554  441617 config.go:182] Loaded profile config "functional-155561": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0103 20:05:16.331751  441617 config.go:182] Loaded profile config "functional-155561": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0103 20:05:16.332319  441617 cli_runner.go:164] Run: docker container inspect functional-155561 --format={{.State.Status}}
I0103 20:05:16.356857  441617 ssh_runner.go:195] Run: systemctl --version
I0103 20:05:16.356915  441617 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-155561
I0103 20:05:16.378312  441617 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/17885-409390/.minikube/machines/functional-155561/id_rsa Username:docker}
I0103 20:05:16.477130  441617 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.31s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.33s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-155561 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-155561 image ls --format json --alsologtostderr:
[{"id":"97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108","repoDigests":["registry.k8s.io/coredns/coredns@sha256:74130b944396a0b0ca9af923ee6e03b08a35d98fc1bbaef4e35cf9acc5599105","registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e"],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"51393451"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":["registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"528622"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca"],"repoTags":["registry.k8s.io/pause:latest"],"size":"246070"},{"id":"8aea65d81da202cf886d7766c7f2691bb9e363c6b5d9b1f5d9ddaaa4bc1e90c2","repoDigests":["docker.io/library/nginx@sha256:2bdc49f2f8ae8d8dc50ed00f2ee56d0
0385c6f8bc8a8b320d0a294d9e3b49026","docker.io/library/nginx@sha256:73a06b3a2577448f9acc23502a0cb4d41919da9cc5035e66b0a9a09715397684"],"repoTags":["docker.io/library/nginx:latest"],"size":"196113558"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3774172"},{"id":"3ca3ca488cf13fde14cfc4b3ffde0c53a8c161b030f4a444a797fba6aef38c39","repoDigests":["registry.k8s.io/kube-proxy@sha256:460fb2711108dbfbe9ad77860d0fd8aad842c0e97f1aee757b33f2218daece68","registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"],"repoTags":["registry.k8s.io/kube-proxy:v1.28.4"],"size":"69992343"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.i
o/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c","docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a"],"repoTags":[],"size":"42263767"},{"id":"04b4c447bb9d4840af3bf7e836397379d65df87c86e55dcd27f31a8d11df2419","repoDigests":["registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb","registry.k8s.io/kube-apiserver@sha256:a4c3e6bec39f5dcb221a2f08266513ab19b7d977ccc76a0bcaf04d4935ac0fb2"],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.4"],"size":"121119694"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29037500"},{"id":
"9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace","repoDigests":["registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3","registry.k8s.io/etcd@sha256:e60789d18cc66486e6db4094383f9732280092f07a1f5455ecbe11d404c8e48b"],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"182203183"},{"id":"05c284c929889d88306fdb3dd14ee2d0132543740f9e247685243214fc3d2c54","repoDigests":["registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba","registry.k8s.io/kube-scheduler@sha256:ddb0fb05335238789e9a847f0c6731e1da918c42a389625cb2a7ec577ca20afe"],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.4"],"size":"59253556"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":["registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"487479"},{"id":"829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e"
,"repoDigests":["registry.k8s.io/pause@sha256:3ec98b8452dc8ae265a6917dfb81587ac78849e520d5dbba6de524851d20eca6","registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"520014"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf"],"repoTags":[],"size":"247562353"},{"id":"74077e780ec714353793e0ef5677b55d7396aa1d31e77ec899f54842f7142448","repoDigests":["docker.io/library/nginx@sha256:7913e8fa2e6a5f0160a5e6b7ea48b7d4a301c6058d63c3d632a35a59093cb4eb","docker.io/library/nginx@sha256:a59278fd22a9d411121e190b8cec8aa57b306aa3332459197777583beb728f59"],"repoTags":["docker.io/library/nginx:alpine"],"size":"45330189"},{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87be
b","repoDigests":["registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5"],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"87536549"},{"id":"9961cbceaf234d59b7dcf8a197a024f3e3ce4b7fe2b67c2378efd3d209ca994b","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c","registry.k8s.io/kube-controller-manager@sha256:fe49ea386d014cbf10cd16f53900b91bb7e7c32c5cf4fdcea550c33bedfdc92e"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.4"],"size":"117252916"},{"id":"04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26","repoDigests":["docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052","docker.io/kindest/kindnetd@sha256:f61a1c916e587322444cab4e745a66c8bed6c30208e4dae28d5a1d18c070adb2"],"repoTags":["docker.io/kindest/kindnetd:v20230809-80a64d96"],"size":"60867618"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0
b516b59cf91","repoDigests":["gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126"],"repoTags":["gcr.io/google-containers/addon-resizer:functional-155561"],"size":"34114467"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-155561 image ls --format json --alsologtostderr:
I0103 20:05:16.013248  441543 out.go:296] Setting OutFile to fd 1 ...
I0103 20:05:16.013501  441543 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0103 20:05:16.013508  441543 out.go:309] Setting ErrFile to fd 2...
I0103 20:05:16.013514  441543 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0103 20:05:16.013853  441543 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17885-409390/.minikube/bin
I0103 20:05:16.014829  441543 config.go:182] Loaded profile config "functional-155561": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0103 20:05:16.015019  441543 config.go:182] Loaded profile config "functional-155561": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0103 20:05:16.015784  441543 cli_runner.go:164] Run: docker container inspect functional-155561 --format={{.State.Status}}
I0103 20:05:16.045070  441543 ssh_runner.go:195] Run: systemctl --version
I0103 20:05:16.045131  441543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-155561
I0103 20:05:16.075386  441543 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/17885-409390/.minikube/machines/functional-155561/id_rsa Username:docker}
I0103 20:05:16.181537  441543 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.33s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-155561 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-155561 image ls --format yaml --alsologtostderr:
- id: 04b4c447bb9d4840af3bf7e836397379d65df87c86e55dcd27f31a8d11df2419
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb
- registry.k8s.io/kube-apiserver@sha256:a4c3e6bec39f5dcb221a2f08266513ab19b7d977ccc76a0bcaf04d4935ac0fb2
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.4
size: "121119694"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf
repoTags: []
size: "247562353"
- id: 8aea65d81da202cf886d7766c7f2691bb9e363c6b5d9b1f5d9ddaaa4bc1e90c2
repoDigests:
- docker.io/library/nginx@sha256:2bdc49f2f8ae8d8dc50ed00f2ee56d00385c6f8bc8a8b320d0a294d9e3b49026
- docker.io/library/nginx@sha256:73a06b3a2577448f9acc23502a0cb4d41919da9cc5035e66b0a9a09715397684
repoTags:
- docker.io/library/nginx:latest
size: "196113558"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests:
- gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126
repoTags:
- gcr.io/google-containers/addon-resizer:functional-155561
size: "34114467"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3774172"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29037500"
- id: 05c284c929889d88306fdb3dd14ee2d0132543740f9e247685243214fc3d2c54
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba
- registry.k8s.io/kube-scheduler@sha256:ddb0fb05335238789e9a847f0c6731e1da918c42a389625cb2a7ec577ca20afe
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.4
size: "59253556"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
- docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a
repoTags: []
size: "42263767"
- id: 97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:74130b944396a0b0ca9af923ee6e03b08a35d98fc1bbaef4e35cf9acc5599105
- registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "51393451"
- id: 9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace
repoDigests:
- registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3
- registry.k8s.io/etcd@sha256:e60789d18cc66486e6db4094383f9732280092f07a1f5455ecbe11d404c8e48b
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "182203183"
- id: 9961cbceaf234d59b7dcf8a197a024f3e3ce4b7fe2b67c2378efd3d209ca994b
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c
- registry.k8s.io/kube-controller-manager@sha256:fe49ea386d014cbf10cd16f53900b91bb7e7c32c5cf4fdcea550c33bedfdc92e
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.4
size: "117252916"
- id: 829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e
repoDigests:
- registry.k8s.io/pause@sha256:3ec98b8452dc8ae265a6917dfb81587ac78849e520d5dbba6de524851d20eca6
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
repoTags:
- registry.k8s.io/pause:3.9
size: "520014"
- id: 04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26
repoDigests:
- docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052
- docker.io/kindest/kindnetd@sha256:f61a1c916e587322444cab4e745a66c8bed6c30208e4dae28d5a1d18c070adb2
repoTags:
- docker.io/kindest/kindnetd:v20230809-80a64d96
size: "60867618"
- id: 74077e780ec714353793e0ef5677b55d7396aa1d31e77ec899f54842f7142448
repoDigests:
- docker.io/library/nginx@sha256:7913e8fa2e6a5f0160a5e6b7ea48b7d4a301c6058d63c3d632a35a59093cb4eb
- docker.io/library/nginx@sha256:a59278fd22a9d411121e190b8cec8aa57b306aa3332459197777583beb728f59
repoTags:
- docker.io/library/nginx:alpine
size: "45330189"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests:
- registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "87536549"
- id: 3ca3ca488cf13fde14cfc4b3ffde0c53a8c161b030f4a444a797fba6aef38c39
repoDigests:
- registry.k8s.io/kube-proxy@sha256:460fb2711108dbfbe9ad77860d0fd8aad842c0e97f1aee757b33f2218daece68
- registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532
repoTags:
- registry.k8s.io/kube-proxy:v1.28.4
size: "69992343"

functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-155561 image ls --format yaml --alsologtostderr:
I0103 20:05:15.631482  441484 out.go:296] Setting OutFile to fd 1 ...
I0103 20:05:15.631741  441484 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0103 20:05:15.631770  441484 out.go:309] Setting ErrFile to fd 2...
I0103 20:05:15.631791  441484 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0103 20:05:15.632099  441484 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17885-409390/.minikube/bin
I0103 20:05:15.632795  441484 config.go:182] Loaded profile config "functional-155561": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0103 20:05:15.632980  441484 config.go:182] Loaded profile config "functional-155561": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0103 20:05:15.633616  441484 cli_runner.go:164] Run: docker container inspect functional-155561 --format={{.State.Status}}
I0103 20:05:15.656329  441484 ssh_runner.go:195] Run: systemctl --version
I0103 20:05:15.656381  441484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-155561
I0103 20:05:15.680206  441484 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/17885-409390/.minikube/machines/functional-155561/id_rsa Username:docker}
I0103 20:05:15.776364  441484 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.29s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-arm64 -p functional-155561 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-155561 ssh pgrep buildkitd: exit status 1 (425.601302ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-arm64 -p functional-155561 image build -t localhost/my-image:functional-155561 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-arm64 -p functional-155561 image build -t localhost/my-image:functional-155561 testdata/build --alsologtostderr: (2.549128511s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-arm64 -p functional-155561 image build -t localhost/my-image:functional-155561 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 11485f147dc
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-155561
--> 151bc1288a3
Successfully tagged localhost/my-image:functional-155561
151bc1288a381cddc579f913d67ef800759fb890d54302cc5777a6d64d97434a
functional_test.go:322: (dbg) Stderr: out/minikube-linux-arm64 -p functional-155561 image build -t localhost/my-image:functional-155561 testdata/build --alsologtostderr:
I0103 20:05:16.351005  441622 out.go:296] Setting OutFile to fd 1 ...
I0103 20:05:16.351946  441622 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0103 20:05:16.351993  441622 out.go:309] Setting ErrFile to fd 2...
I0103 20:05:16.352014  441622 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0103 20:05:16.352338  441622 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17885-409390/.minikube/bin
I0103 20:05:16.353336  441622 config.go:182] Loaded profile config "functional-155561": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0103 20:05:16.356185  441622 config.go:182] Loaded profile config "functional-155561": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0103 20:05:16.356971  441622 cli_runner.go:164] Run: docker container inspect functional-155561 --format={{.State.Status}}
I0103 20:05:16.392968  441622 ssh_runner.go:195] Run: systemctl --version
I0103 20:05:16.393020  441622 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-155561
I0103 20:05:16.414117  441622 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/17885-409390/.minikube/machines/functional-155561/id_rsa Username:docker}
I0103 20:05:16.526716  441622 build_images.go:151] Building image from path: /tmp/build.2944597616.tar
I0103 20:05:16.526808  441622 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0103 20:05:16.552814  441622 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2944597616.tar
I0103 20:05:16.559004  441622 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2944597616.tar: stat -c "%s %y" /var/lib/minikube/build/build.2944597616.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.2944597616.tar': No such file or directory
I0103 20:05:16.559037  441622 ssh_runner.go:362] scp /tmp/build.2944597616.tar --> /var/lib/minikube/build/build.2944597616.tar (3072 bytes)
I0103 20:05:16.591673  441622 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2944597616
I0103 20:05:16.603342  441622 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2944597616 -xf /var/lib/minikube/build/build.2944597616.tar
I0103 20:05:16.614760  441622 crio.go:297] Building image: /var/lib/minikube/build/build.2944597616
I0103 20:05:16.614872  441622 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-155561 /var/lib/minikube/build/build.2944597616 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I0103 20:05:18.777850  441622 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-155561 /var/lib/minikube/build/build.2944597616 --cgroup-manager=cgroupfs: (2.162930561s)
I0103 20:05:18.777917  441622 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2944597616
I0103 20:05:18.788402  441622 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2944597616.tar
I0103 20:05:18.799057  441622 build_images.go:207] Built localhost/my-image:functional-155561 from /tmp/build.2944597616.tar
I0103 20:05:18.799090  441622 build_images.go:123] succeeded building to: functional-155561
I0103 20:05:18.799096  441622 build_images.go:124] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-155561 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.24s)
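
On the CRI-O runtime the build path first probes for buildkitd (the pgrep failure with exit 1 is the expected "not running" answer), then tars the build context, copies it into /var/lib/minikube/build, and delegates to podman inside the node, as the trace shows. The underlying step is equivalent to:

  out/minikube-linux-arm64 -p functional-155561 ssh "sudo podman build -t localhost/my-image:functional-155561 /var/lib/minikube/build/build.2944597616 --cgroup-manager=cgroupfs"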

TestFunctional/parallel/ImageCommands/Setup (2.75s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
2024/01/03 20:04:56 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (2.717162236s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-155561
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.75s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (5.47s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-arm64 -p functional-155561 image load --daemon gcr.io/google-containers/addon-resizer:functional-155561 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-arm64 -p functional-155561 image load --daemon gcr.io/google-containers/addon-resizer:functional-155561 --alsologtostderr: (5.201144845s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-155561 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (5.47s)
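
image load --daemon takes the tagged image from the host's Docker daemon (where Setup pulled it) and loads it into the cluster's CRI-O image store; the follow-up image ls confirms the transfer. A manual check:

  out/minikube-linux-arm64 -p functional-155561 image ls | grep addon-resizer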

TestFunctional/parallel/UpdateContextCmd/no_changes (0.28s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2118: (dbg) Run:  out/minikube-linux-arm64 -p functional-155561 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.28s)
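update-context rewrites the kubeconfig entry for the profile so its server address matches the cluster's current endpoint; the three subtests exercise it with no changes, no cluster, and no contexts. A sketch, where the verification step is an added assumption rather than part of the test:

	minikube -p functional-155561 update-context --alsologtostderr -v=2
	kubectl config get-contexts functional-155561   # hypothetical check that the context exists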

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2118: (dbg) Run:  out/minikube-linux-arm64 -p functional-155561 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.27s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2118: (dbg) Run:  out/minikube-linux-arm64 -p functional-155561 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.27s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.93s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-arm64 -p functional-155561 image load --daemon gcr.io/google-containers/addon-resizer:functional-155561 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-arm64 -p functional-155561 image load --daemon gcr.io/google-containers/addon-resizer:functional-155561 --alsologtostderr: (2.669462231s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-155561 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.93s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (2.456730012s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-155561
functional_test.go:244: (dbg) Run:  out/minikube-linux-arm64 -p functional-155561 image load --daemon gcr.io/google-containers/addon-resizer:functional-155561 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-arm64 -p functional-155561 image load --daemon gcr.io/google-containers/addon-resizer:functional-155561 --alsologtostderr: (3.752661378s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-155561 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.49s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.93s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-arm64 -p functional-155561 image save gcr.io/google-containers/addon-resizer:functional-155561 /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.93s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-arm64 -p functional-155561 image rm gcr.io/google-containers/addon-resizer:functional-155561 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-155561 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.57s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-arm64 -p functional-155561 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-arm64 -p functional-155561 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar --alsologtostderr: (1.028256495s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-155561 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.31s)
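The three subtests above form a save/remove/load round trip through a tarball. A condensed sketch, using /tmp for the archive instead of the Jenkins workspace path in the log:

	minikube -p functional-155561 image save \
	  gcr.io/google-containers/addon-resizer:functional-155561 /tmp/addon-resizer-save.tar
	minikube -p functional-155561 image rm \
	  gcr.io/google-containers/addon-resizer:functional-155561
	minikube -p functional-155561 image load /tmp/addon-resizer-save.tar
	minikube -p functional-155561 image ls   # the tag should be back after the load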

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.98s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-155561
functional_test.go:423: (dbg) Run:  out/minikube-linux-arm64 -p functional-155561 image save --daemon gcr.io/google-containers/addon-resizer:functional-155561 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-155561
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.98s)

                                                
                                    
TestFunctional/delete_addon-resizer_images (0.09s)

                                                
                                                
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-155561
--- PASS: TestFunctional/delete_addon-resizer_images (0.09s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-155561
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-155561
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestIngressAddonLegacy/StartLegacyK8sCluster (89.29s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-arm64 start -p ingress-addon-legacy-480050 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E0103 20:06:02.278731  414763 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/addons-845596/client.crt: no such file or directory
E0103 20:06:29.967116  414763 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/addons-845596/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-arm64 start -p ingress-addon-legacy-480050 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (1m29.28921875s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (89.29s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (12.48s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-480050 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-480050 addons enable ingress --alsologtostderr -v=5: (12.478519841s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (12.48s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.69s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-480050 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.69s)

                                                
                                    
TestJSONOutput/start/Command (52.31s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-668949 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio
E0103 20:10:33.435760  414763 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/functional-155561/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-668949 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio: (52.309595355s)
--- PASS: TestJSONOutput/start/Command (52.31s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.85s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-668949 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.85s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (1.02s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-668949 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 unpause -p json-output-668949 --output=json --user=testUser: (1.022956809s)
--- PASS: TestJSONOutput/unpause/Command (1.02s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (6s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-668949 --output=json --user=testUser
E0103 20:11:02.279315  414763 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/addons-845596/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-668949 --output=json --user=testUser: (5.996129334s)
--- PASS: TestJSONOutput/stop/Command (6.00s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.27s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-410091 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-410091 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (95.291047ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"0ac60eb5-57a4-49b6-8385-2437817e2531","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-410091] minikube v1.32.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"66e9cc68-cb84-4813-9884-fe5615f1a71a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17885"}}
	{"specversion":"1.0","id":"b0604667-1352-4912-bfd7-04168358e554","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"b71dde9b-543e-4b90-93b6-34826306debd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17885-409390/kubeconfig"}}
	{"specversion":"1.0","id":"f79ab39c-128a-4b0a-9fd2-f824462e645d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17885-409390/.minikube"}}
	{"specversion":"1.0","id":"b161907c-97a9-4169-9e1a-bfd83479561c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"b8304d1e-3d9e-40a0-8c3b-f91ed9fc8b37","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"f87f29b1-bb5d-4461-b40a-bd1480a61cc2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-410091" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-410091
--- PASS: TestErrorJSONOutput (0.27s)
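With --output=json, minikube emits one CloudEvents-style JSON object per line, and errors arrive as io.k8s.sigs.minikube.error events carrying exitcode, name, and message fields, as the stdout above shows. A sketch of extracting the error from the stream, assuming jq is installed (the filter itself is illustrative, not part of the test):

	out/minikube-linux-arm64 start -p json-output-error-410091 --memory=2200 --output=json --wait=true --driver=fail \
	  | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.name + ": " + .data.message'
	# prints: DRV_UNSUPPORTED_OS: The driver 'fail' is not supported on linux/arm64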

                                                
                                    
TestKicCustomNetwork/create_custom_network (44.01s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-506821 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-506821 --network=: (41.902172925s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-506821" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-506821
E0103 20:11:55.355993  414763 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/functional-155561/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-506821: (2.080558833s)
--- PASS: TestKicCustomNetwork/create_custom_network (44.01s)
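Passing a bare --network= asks minikube to create a dedicated Docker network for the profile, named after it, which is what the docker network ls check above appears to verify. A sketch of the create-and-check sequence:

	minikube start -p docker-network-506821 --network=
	docker network ls --format '{{.Name}}' | grep docker-network-506821
	minikube delete -p docker-network-506821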

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (35.66s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-081731 --network=bridge
E0103 20:12:04.569099  414763 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/ingress-addon-legacy-480050/client.crt: no such file or directory
E0103 20:12:04.574595  414763 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/ingress-addon-legacy-480050/client.crt: no such file or directory
E0103 20:12:04.585584  414763 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/ingress-addon-legacy-480050/client.crt: no such file or directory
E0103 20:12:04.605919  414763 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/ingress-addon-legacy-480050/client.crt: no such file or directory
E0103 20:12:04.646238  414763 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/ingress-addon-legacy-480050/client.crt: no such file or directory
E0103 20:12:04.726533  414763 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/ingress-addon-legacy-480050/client.crt: no such file or directory
E0103 20:12:04.886829  414763 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/ingress-addon-legacy-480050/client.crt: no such file or directory
E0103 20:12:05.207352  414763 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/ingress-addon-legacy-480050/client.crt: no such file or directory
E0103 20:12:05.847735  414763 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/ingress-addon-legacy-480050/client.crt: no such file or directory
E0103 20:12:07.128517  414763 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/ingress-addon-legacy-480050/client.crt: no such file or directory
E0103 20:12:09.688732  414763 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/ingress-addon-legacy-480050/client.crt: no such file or directory
E0103 20:12:14.808929  414763 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/ingress-addon-legacy-480050/client.crt: no such file or directory
E0103 20:12:25.049133  414763 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/ingress-addon-legacy-480050/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-081731 --network=bridge: (33.581549271s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-081731" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-081731
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-081731: (2.049714359s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (35.66s)

                                                
                                    
TestKicExistingNetwork (35.15s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-044627 --network=existing-network
E0103 20:12:45.529344  414763 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/ingress-addon-legacy-480050/client.crt: no such file or directory
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-044627 --network=existing-network: (32.853524577s)
helpers_test.go:175: Cleaning up "existing-network-044627" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-044627
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-044627: (2.112330526s)
--- PASS: TestKicExistingNetwork (35.15s)
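Here the Docker network exists before minikube starts, and --network=existing-network attaches the new node container to it instead of creating one. The network-creation step below is performed by the test helper outside this excerpt, so it is shown as an assumption:

	docker network create existing-network   # done by the test setup, not visible in the log
	minikube start -p existing-network-044627 --network=existing-network
	minikube delete -p existing-network-044627
	docker network rm existing-network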

                                                
                                    
TestKicCustomSubnet (36.3s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-223546 --subnet=192.168.60.0/24
E0103 20:13:26.490647  414763 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/ingress-addon-legacy-480050/client.crt: no such file or directory
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-223546 --subnet=192.168.60.0/24: (34.091027735s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-223546 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-223546" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-223546
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-223546: (2.182972969s)
--- PASS: TestKicCustomSubnet (36.30s)
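--subnet pins the CIDR of the network minikube creates, and the inspect format string above reads it back from the network's IPAM config. Both commands are taken from the run:

	minikube start -p custom-subnet-223546 --subnet=192.168.60.0/24
	docker network inspect custom-subnet-223546 \
	  --format '{{(index .IPAM.Config 0).Subnet}}'   # expect 192.168.60.0/24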

                                                
                                    
TestKicStaticIP (35.12s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-999852 --static-ip=192.168.200.200
E0103 20:14:11.515743  414763 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/functional-155561/client.crt: no such file or directory
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-999852 --static-ip=192.168.200.200: (32.765292235s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-999852 ip
helpers_test.go:175: Cleaning up "static-ip-999852" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-999852
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-999852: (2.165393643s)
--- PASS: TestKicStaticIP (35.12s)
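--static-ip assigns the node container a fixed address on its own network, and minikube ip is the quickest way to confirm it took effect:

	minikube start -p static-ip-999852 --static-ip=192.168.200.200
	minikube -p static-ip-999852 ip   # should print 192.168.200.200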

                                                
                                    
TestMainNoArgs (0.08s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.08s)

                                                
                                    
TestMinikubeProfile (70.03s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-357759 --driver=docker  --container-runtime=crio
E0103 20:14:39.197073  414763 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/functional-155561/client.crt: no such file or directory
E0103 20:14:48.410859  414763 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/ingress-addon-legacy-480050/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-357759 --driver=docker  --container-runtime=crio: (31.691846575s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-360343 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-360343 --driver=docker  --container-runtime=crio: (32.440490245s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-357759
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-360343
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-360343" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-360343
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-360343: (2.147994009s)
helpers_test.go:175: Cleaning up "first-357759" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-357759
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-357759: (2.452789988s)
--- PASS: TestMinikubeProfile (70.03s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (7.9s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-831368 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-831368 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (6.899694919s)
--- PASS: TestMountStart/serial/StartWithMountFirst (7.90s)
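The mount flags here set up a 9p share from the host into the guest at start time (uid/gid 0, msize 6543, a fixed port), and the later Verify* steps simply list /minikube-host over SSH. A sketch combining the start and the check:

	minikube start -p mount-start-1-831368 --memory=2048 --mount \
	  --mount-gid 0 --mount-uid 0 --mount-msize 6543 --mount-port 46464 \
	  --no-kubernetes --driver=docker --container-runtime=crio
	minikube -p mount-start-1-831368 ssh -- ls /minikube-host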

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.3s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-831368 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.30s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (8.12s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-833444 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-833444 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (7.121820627s)
--- PASS: TestMountStart/serial/StartWithMountSecond (8.12s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.31s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-833444 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.31s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.78s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-831368 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-831368 --alsologtostderr -v=5: (1.777784984s)
--- PASS: TestMountStart/serial/DeleteFirst (1.78s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.31s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-833444 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.31s)

                                                
                                    
TestMountStart/serial/Stop (1.23s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-833444
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-833444: (1.226115699s)
--- PASS: TestMountStart/serial/Stop (1.23s)

                                                
                                    
TestMountStart/serial/RestartStopped (7.99s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-833444
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-833444: (6.994266519s)
--- PASS: TestMountStart/serial/RestartStopped (7.99s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.31s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-833444 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.31s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (124.89s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:86: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-004925 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E0103 20:16:02.280206  414763 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/addons-845596/client.crt: no such file or directory
E0103 20:17:04.568116  414763 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/ingress-addon-legacy-480050/client.crt: no such file or directory
E0103 20:17:25.328179  414763 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/addons-845596/client.crt: no such file or directory
E0103 20:17:32.251045  414763 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/ingress-addon-legacy-480050/client.crt: no such file or directory
multinode_test.go:86: (dbg) Done: out/minikube-linux-arm64 start -p multinode-004925 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (2m4.279963532s)
multinode_test.go:92: (dbg) Run:  out/minikube-linux-arm64 -p multinode-004925 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (124.89s)
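--nodes=2 brings up a control plane plus one worker in a single start, and status then reports each node separately. A sketch, with the kubectl check being an added assumption rather than part of the test:

	minikube start -p multinode-004925 --nodes=2 --memory=2200 --wait=true \
	  --driver=docker --container-runtime=crio
	minikube -p multinode-004925 status --alsologtostderr
	kubectl --context multinode-004925 get nodes   # hypothetical: expect two Ready nodes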

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (5.41s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:509: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-004925 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:514: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-004925 -- rollout status deployment/busybox
multinode_test.go:514: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-004925 -- rollout status deployment/busybox: (3.264449526s)
multinode_test.go:521: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-004925 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:544: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-004925 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:552: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-004925 -- exec busybox-5bc68d56bd-fs9dz -- nslookup kubernetes.io
multinode_test.go:552: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-004925 -- exec busybox-5bc68d56bd-m75vn -- nslookup kubernetes.io
multinode_test.go:562: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-004925 -- exec busybox-5bc68d56bd-fs9dz -- nslookup kubernetes.default
multinode_test.go:562: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-004925 -- exec busybox-5bc68d56bd-m75vn -- nslookup kubernetes.default
multinode_test.go:570: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-004925 -- exec busybox-5bc68d56bd-fs9dz -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:570: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-004925 -- exec busybox-5bc68d56bd-m75vn -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.41s)
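The deploy test schedules a busybox Deployment whose two pods land on both nodes, waits for rollout, then runs nslookup inside each pod to prove cross-node cluster DNS resolves. A condensed sketch of the same loop (the jsonpath pod listing mirrors the log; everything runs in the default namespace):

	kubectl --context multinode-004925 apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
	kubectl --context multinode-004925 rollout status deployment/busybox
	for pod in $(kubectl --context multinode-004925 get pods -o jsonpath='{.items[*].metadata.name}'); do
	  kubectl --context multinode-004925 exec "$pod" -- nslookup kubernetes.default.svc.cluster.local
	done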

                                                
                                    
TestMultiNode/serial/AddNode (50.58s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:111: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-004925 -v 3 --alsologtostderr
multinode_test.go:111: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-004925 -v 3 --alsologtostderr: (49.842623375s)
multinode_test.go:117: (dbg) Run:  out/minikube-linux-arm64 -p multinode-004925 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (50.58s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.12s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:211: (dbg) Run:  kubectl --context multinode-004925 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.12s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.37s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:133: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.37s)

                                                
                                    
TestMultiNode/serial/CopyFile (11.52s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:174: (dbg) Run:  out/minikube-linux-arm64 -p multinode-004925 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-004925 cp testdata/cp-test.txt multinode-004925:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-004925 ssh -n multinode-004925 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-004925 cp multinode-004925:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2371258798/001/cp-test_multinode-004925.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-004925 ssh -n multinode-004925 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-004925 cp multinode-004925:/home/docker/cp-test.txt multinode-004925-m02:/home/docker/cp-test_multinode-004925_multinode-004925-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-004925 ssh -n multinode-004925 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-004925 ssh -n multinode-004925-m02 "sudo cat /home/docker/cp-test_multinode-004925_multinode-004925-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-004925 cp multinode-004925:/home/docker/cp-test.txt multinode-004925-m03:/home/docker/cp-test_multinode-004925_multinode-004925-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-004925 ssh -n multinode-004925 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-004925 ssh -n multinode-004925-m03 "sudo cat /home/docker/cp-test_multinode-004925_multinode-004925-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-004925 cp testdata/cp-test.txt multinode-004925-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-004925 ssh -n multinode-004925-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-004925 cp multinode-004925-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2371258798/001/cp-test_multinode-004925-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-004925 ssh -n multinode-004925-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-004925 cp multinode-004925-m02:/home/docker/cp-test.txt multinode-004925:/home/docker/cp-test_multinode-004925-m02_multinode-004925.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-004925 ssh -n multinode-004925-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-004925 ssh -n multinode-004925 "sudo cat /home/docker/cp-test_multinode-004925-m02_multinode-004925.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-004925 cp multinode-004925-m02:/home/docker/cp-test.txt multinode-004925-m03:/home/docker/cp-test_multinode-004925-m02_multinode-004925-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-004925 ssh -n multinode-004925-m02 "sudo cat /home/docker/cp-test.txt"
E0103 20:19:11.513900  414763 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/functional-155561/client.crt: no such file or directory
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-004925 ssh -n multinode-004925-m03 "sudo cat /home/docker/cp-test_multinode-004925-m02_multinode-004925-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-004925 cp testdata/cp-test.txt multinode-004925-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-004925 ssh -n multinode-004925-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-004925 cp multinode-004925-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2371258798/001/cp-test_multinode-004925-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-004925 ssh -n multinode-004925-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-004925 cp multinode-004925-m03:/home/docker/cp-test.txt multinode-004925:/home/docker/cp-test_multinode-004925-m03_multinode-004925.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-004925 ssh -n multinode-004925-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-004925 ssh -n multinode-004925 "sudo cat /home/docker/cp-test_multinode-004925-m03_multinode-004925.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-004925 cp multinode-004925-m03:/home/docker/cp-test.txt multinode-004925-m02:/home/docker/cp-test_multinode-004925-m03_multinode-004925-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-004925 ssh -n multinode-004925-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-004925 ssh -n multinode-004925-m02 "sudo cat /home/docker/cp-test_multinode-004925-m03_multinode-004925-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (11.52s)
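CopyFile runs minikube cp in every direction (host to node, node to host, node to node) and verifies each copy with cat over SSH; the -n flag selects the target node. A trimmed sketch of the three directions, with the host-side destination path chosen for illustration:

	# host -> primary node
	minikube -p multinode-004925 cp testdata/cp-test.txt multinode-004925:/home/docker/cp-test.txt
	# node -> host
	minikube -p multinode-004925 cp multinode-004925:/home/docker/cp-test.txt /tmp/cp-test.txt
	# node -> node, then verify on the target
	minikube -p multinode-004925 cp multinode-004925:/home/docker/cp-test.txt \
	  multinode-004925-m02:/home/docker/cp-test_multinode-004925.txt
	minikube -p multinode-004925 ssh -n multinode-004925-m02 \
	  "sudo cat /home/docker/cp-test_multinode-004925.txt"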

                                                
                                    
TestMultiNode/serial/StopNode (2.39s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:238: (dbg) Run:  out/minikube-linux-arm64 -p multinode-004925 node stop m03
multinode_test.go:238: (dbg) Done: out/minikube-linux-arm64 -p multinode-004925 node stop m03: (1.243368803s)
multinode_test.go:244: (dbg) Run:  out/minikube-linux-arm64 -p multinode-004925 status
multinode_test.go:244: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-004925 status: exit status 7 (576.969793ms)

                                                
                                                
-- stdout --
	multinode-004925
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-004925-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-004925-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:251: (dbg) Run:  out/minikube-linux-arm64 -p multinode-004925 status --alsologtostderr
multinode_test.go:251: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-004925 status --alsologtostderr: exit status 7 (565.520836ms)

                                                
                                                
-- stdout --
	multinode-004925
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-004925-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-004925-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0103 20:19:17.316114  488193 out.go:296] Setting OutFile to fd 1 ...
	I0103 20:19:17.316263  488193 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0103 20:19:17.316272  488193 out.go:309] Setting ErrFile to fd 2...
	I0103 20:19:17.316278  488193 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0103 20:19:17.316622  488193 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17885-409390/.minikube/bin
	I0103 20:19:17.316837  488193 out.go:303] Setting JSON to false
	I0103 20:19:17.316929  488193 mustload.go:65] Loading cluster: multinode-004925
	I0103 20:19:17.317689  488193 config.go:182] Loaded profile config "multinode-004925": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0103 20:19:17.317724  488193 status.go:255] checking status of multinode-004925 ...
	I0103 20:19:17.318709  488193 notify.go:220] Checking for updates...
	I0103 20:19:17.319218  488193 cli_runner.go:164] Run: docker container inspect multinode-004925 --format={{.State.Status}}
	I0103 20:19:17.341944  488193 status.go:330] multinode-004925 host status = "Running" (err=<nil>)
	I0103 20:19:17.341965  488193 host.go:66] Checking if "multinode-004925" exists ...
	I0103 20:19:17.342255  488193 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-004925
	I0103 20:19:17.360165  488193 host.go:66] Checking if "multinode-004925" exists ...
	I0103 20:19:17.360503  488193 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0103 20:19:17.360622  488193 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-004925
	I0103 20:19:17.383313  488193 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/17885-409390/.minikube/machines/multinode-004925/id_rsa Username:docker}
	I0103 20:19:17.481147  488193 ssh_runner.go:195] Run: systemctl --version
	I0103 20:19:17.486837  488193 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0103 20:19:17.500192  488193 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0103 20:19:17.577902  488193 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:40 OomKillDisable:true NGoroutines:55 SystemTime:2024-01-03 20:19:17.568242396 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215044096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0103 20:19:17.578473  488193 kubeconfig.go:92] found "multinode-004925" server: "https://192.168.58.2:8443"
	I0103 20:19:17.578547  488193 api_server.go:166] Checking apiserver status ...
	I0103 20:19:17.578623  488193 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0103 20:19:17.592935  488193 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1270/cgroup
	I0103 20:19:17.604053  488193 api_server.go:182] apiserver freezer: "9:freezer:/docker/a8b5d16b1951a483b6aef678c3a9e4af0d01efc76e4ab6e72a5f853aa60f6da3/crio/crio-0697f086eb96a9ef7daccbd103c6c4cf9d02b28f9c97f681ca20efc9ed793bf8"
	I0103 20:19:17.604121  488193 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/a8b5d16b1951a483b6aef678c3a9e4af0d01efc76e4ab6e72a5f853aa60f6da3/crio/crio-0697f086eb96a9ef7daccbd103c6c4cf9d02b28f9c97f681ca20efc9ed793bf8/freezer.state
	I0103 20:19:17.614509  488193 api_server.go:204] freezer state: "THAWED"
	I0103 20:19:17.614554  488193 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0103 20:19:17.623559  488193 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I0103 20:19:17.623587  488193 status.go:421] multinode-004925 apiserver status = Running (err=<nil>)
	I0103 20:19:17.623598  488193 status.go:257] multinode-004925 status: &{Name:multinode-004925 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0103 20:19:17.623646  488193 status.go:255] checking status of multinode-004925-m02 ...
	I0103 20:19:17.623970  488193 cli_runner.go:164] Run: docker container inspect multinode-004925-m02 --format={{.State.Status}}
	I0103 20:19:17.642221  488193 status.go:330] multinode-004925-m02 host status = "Running" (err=<nil>)
	I0103 20:19:17.642249  488193 host.go:66] Checking if "multinode-004925-m02" exists ...
	I0103 20:19:17.642570  488193 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-004925-m02
	I0103 20:19:17.659991  488193 host.go:66] Checking if "multinode-004925-m02" exists ...
	I0103 20:19:17.660297  488193 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0103 20:19:17.660345  488193 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-004925-m02
	I0103 20:19:17.679896  488193 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/17885-409390/.minikube/machines/multinode-004925-m02/id_rsa Username:docker}
	I0103 20:19:17.776904  488193 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0103 20:19:17.790866  488193 status.go:257] multinode-004925-m02 status: &{Name:multinode-004925-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0103 20:19:17.790900  488193 status.go:255] checking status of multinode-004925-m03 ...
	I0103 20:19:17.791220  488193 cli_runner.go:164] Run: docker container inspect multinode-004925-m03 --format={{.State.Status}}
	I0103 20:19:17.809074  488193 status.go:330] multinode-004925-m03 host status = "Stopped" (err=<nil>)
	I0103 20:19:17.809097  488193 status.go:343] host is not running, skipping remaining checks
	I0103 20:19:17.809104  488193 status.go:257] multinode-004925-m03 status: &{Name:multinode-004925-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.39s)
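
Note: the stderr above shows how `minikube status` decides the apiserver is Running: it locates the kube-apiserver PID with pgrep, reads that PID's freezer cgroup to confirm the container is THAWED, and finally probes /healthz over HTTPS. Below is a minimal sketch of that last probe; the endpoint is taken from the log, and skipping TLS verification is an assumption for illustration (minikube's real client trusts the cluster CA instead).

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
	)

	func main() {
		// The apiserver serves a cert signed by the cluster CA, so a
		// throwaway probe like this skips verification for simplicity.
		client := &http.Client{Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		}}
		resp, err := client.Get("https://192.168.58.2:8443/healthz")
		if err != nil {
			fmt.Println("apiserver status = Stopped:", err)
			return
		}
		defer resp.Body.Close()
		fmt.Println("apiserver status =", resp.Status) // "200 OK" maps to Running
	}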

                                                
                                    
TestMultiNode/serial/StartAfterStop (13.2s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:272: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-004925 node start m03 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-004925 node start m03 --alsologtostderr: (12.314580765s)
multinode_test.go:289: (dbg) Run:  out/minikube-linux-arm64 -p multinode-004925 status
multinode_test.go:303: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (13.20s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (121.77s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:311: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-004925
multinode_test.go:318: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-004925
multinode_test.go:318: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-004925: (24.960242974s)
multinode_test.go:323: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-004925 --wait=true -v=8 --alsologtostderr
E0103 20:21:02.278760  414763 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/addons-845596/client.crt: no such file or directory
multinode_test.go:323: (dbg) Done: out/minikube-linux-arm64 start -p multinode-004925 --wait=true -v=8 --alsologtostderr: (1m36.634827162s)
multinode_test.go:328: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-004925
--- PASS: TestMultiNode/serial/RestartKeepsNodes (121.77s)

                                                
                                    
TestMultiNode/serial/DeleteNode (5.25s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-004925 node delete m03
multinode_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p multinode-004925 node delete m03: (4.456575506s)
multinode_test.go:428: (dbg) Run:  out/minikube-linux-arm64 -p multinode-004925 status --alsologtostderr
multinode_test.go:442: (dbg) Run:  docker volume ls
multinode_test.go:452: (dbg) Run:  kubectl get nodes
multinode_test.go:460: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.25s)
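
Note: the final assertion above pipes kubectl output through a go-template that prints each remaining node's Ready condition. A self-contained sketch of the same template logic, evaluated over hypothetical node data instead of live `kubectl get nodes -o json` output:

	package main

	import (
		"os"
		"text/template"
	)

	// Same template string the test passes to kubectl via -o go-template.
	const readyTmpl = `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`

	func main() {
		// Hypothetical two-node list shaped like the kubectl JSON.
		nodes := map[string]any{
			"items": []map[string]any{
				{"status": map[string]any{"conditions": []map[string]any{
					{"type": "Ready", "status": "True"},
				}}},
				{"status": map[string]any{"conditions": []map[string]any{
					{"type": "MemoryPressure", "status": "False"},
					{"type": "Ready", "status": "True"},
				}}},
			},
		}
		// Prints " True" once per node; the test asserts on these lines.
		template.Must(template.New("ready").Parse(readyTmpl)).Execute(os.Stdout, nodes)
	}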

                                                
                                    
TestMultiNode/serial/StopMultiNode (24.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:342: (dbg) Run:  out/minikube-linux-arm64 -p multinode-004925 stop
multinode_test.go:342: (dbg) Done: out/minikube-linux-arm64 -p multinode-004925 stop: (23.823105838s)
multinode_test.go:348: (dbg) Run:  out/minikube-linux-arm64 -p multinode-004925 status
multinode_test.go:348: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-004925 status: exit status 7 (111.79764ms)

                                                
                                                
-- stdout --
	multinode-004925
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-004925-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:355: (dbg) Run:  out/minikube-linux-arm64 -p multinode-004925 status --alsologtostderr
multinode_test.go:355: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-004925 status --alsologtostderr: exit status 7 (129.681797ms)

                                                
                                                
-- stdout --
	multinode-004925
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-004925-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0103 20:22:02.059237  496353 out.go:296] Setting OutFile to fd 1 ...
	I0103 20:22:02.059489  496353 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0103 20:22:02.059519  496353 out.go:309] Setting ErrFile to fd 2...
	I0103 20:22:02.059540  496353 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0103 20:22:02.059876  496353 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17885-409390/.minikube/bin
	I0103 20:22:02.060102  496353 out.go:303] Setting JSON to false
	I0103 20:22:02.060263  496353 notify.go:220] Checking for updates...
	I0103 20:22:02.061029  496353 mustload.go:65] Loading cluster: multinode-004925
	I0103 20:22:02.061510  496353 config.go:182] Loaded profile config "multinode-004925": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0103 20:22:02.061558  496353 status.go:255] checking status of multinode-004925 ...
	I0103 20:22:02.062106  496353 cli_runner.go:164] Run: docker container inspect multinode-004925 --format={{.State.Status}}
	I0103 20:22:02.083273  496353 status.go:330] multinode-004925 host status = "Stopped" (err=<nil>)
	I0103 20:22:02.083294  496353 status.go:343] host is not running, skipping remaining checks
	I0103 20:22:02.083302  496353 status.go:257] multinode-004925 status: &{Name:multinode-004925 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0103 20:22:02.083335  496353 status.go:255] checking status of multinode-004925-m02 ...
	I0103 20:22:02.083653  496353 cli_runner.go:164] Run: docker container inspect multinode-004925-m02 --format={{.State.Status}}
	I0103 20:22:02.102409  496353 status.go:330] multinode-004925-m02 host status = "Stopped" (err=<nil>)
	I0103 20:22:02.102443  496353 status.go:343] host is not running, skipping remaining checks
	I0103 20:22:02.102456  496353 status.go:257] multinode-004925-m02 status: &{Name:multinode-004925-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.06s)
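
Note: after `minikube stop`, the follow-up `status` invocations above exit with code 7 by design; the test treats that as the expected "everything Stopped" signal rather than a command failure. A sketch of how a caller separates the two cases (binary path and profile name taken from the commands above):

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-arm64", "-p", "multinode-004925", "status")
		out, err := cmd.CombinedOutput()
		var ee *exec.ExitError
		if errors.As(err, &ee) && ee.ExitCode() == 7 {
			// Exit 7 means the host is stopped, not that status itself broke.
			fmt.Printf("cluster stopped, as expected:\n%s", out)
			return
		}
		if err != nil {
			fmt.Println("unexpected status error:", err)
			return
		}
		fmt.Println("cluster running")
	}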

                                                
                                    
TestMultiNode/serial/RestartMultiNode (79.59s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:372: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-004925 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E0103 20:22:04.568120  414763 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/ingress-addon-legacy-480050/client.crt: no such file or directory
multinode_test.go:382: (dbg) Done: out/minikube-linux-arm64 start -p multinode-004925 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m18.779736713s)
multinode_test.go:388: (dbg) Run:  out/minikube-linux-arm64 -p multinode-004925 status --alsologtostderr
multinode_test.go:402: (dbg) Run:  kubectl get nodes
multinode_test.go:410: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (79.59s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (35.4s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:471: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-004925
multinode_test.go:480: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-004925-m02 --driver=docker  --container-runtime=crio
multinode_test.go:480: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-004925-m02 --driver=docker  --container-runtime=crio: exit status 14 (102.043519ms)

                                                
                                                
-- stdout --
	* [multinode-004925-m02] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17885
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17885-409390/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17885-409390/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-004925-m02' is duplicated with machine name 'multinode-004925-m02' in profile 'multinode-004925'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:488: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-004925-m03 --driver=docker  --container-runtime=crio
multinode_test.go:488: (dbg) Done: out/minikube-linux-arm64 start -p multinode-004925-m03 --driver=docker  --container-runtime=crio: (32.791496452s)
multinode_test.go:495: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-004925
multinode_test.go:495: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-004925: exit status 80 (338.316775ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-004925
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-004925-m03 already exists in multinode-004925-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:500: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-004925-m03
multinode_test.go:500: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-004925-m03: (2.104383466s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (35.40s)

                                                
                                    
TestPreload (180.38s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-036261 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4
E0103 20:24:11.513690  414763 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/functional-155561/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-036261 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4: (1m28.353171625s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-036261 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-036261 image pull gcr.io/k8s-minikube/busybox: (2.458611327s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-036261
E0103 20:25:34.557519  414763 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/functional-155561/client.crt: no such file or directory
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-036261: (5.903313031s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-036261 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
E0103 20:26:02.278641  414763 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/addons-845596/client.crt: no such file or directory
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-036261 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (1m21.008041944s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-036261 image list
helpers_test.go:175: Cleaning up "test-preload-036261" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-036261
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-036261: (2.38259648s)
--- PASS: TestPreload (180.38s)
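
Note: the sequence above exercises preload handling: start v1.24.4 with --preload=false, pull an extra image, stop, restart with preloads enabled, then confirm via `image list` that the previously pulled image survived. A sketch of that final check (profile name from the log; matching on a substring of the image reference is an illustrative simplification):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("out/minikube-linux-arm64",
			"-p", "test-preload-036261", "image", "list").Output()
		if err != nil {
			fmt.Println("image list failed:", err)
			return
		}
		if strings.Contains(string(out), "gcr.io/k8s-minikube/busybox") {
			fmt.Println("busybox survived the preloaded restart")
		} else {
			fmt.Println("busybox missing: restart clobbered the image store")
		}
	}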

                                                
                                    
TestScheduledStopUnix (110.72s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-487754 --memory=2048 --driver=docker  --container-runtime=crio
E0103 20:27:04.568551  414763 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/ingress-addon-legacy-480050/client.crt: no such file or directory
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-487754 --memory=2048 --driver=docker  --container-runtime=crio: (34.346718677s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-487754 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-487754 -n scheduled-stop-487754
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-487754 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-487754 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-487754 -n scheduled-stop-487754
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-487754
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-487754 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E0103 20:28:27.612164  414763 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/ingress-addon-legacy-480050/client.crt: no such file or directory
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-487754
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-487754: exit status 7 (89.939845ms)

                                                
                                                
-- stdout --
	scheduled-stop-487754
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-487754 -n scheduled-stop-487754
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-487754 -n scheduled-stop-487754: exit status 7 (90.813389ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-487754" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-487754
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-487754: (4.485055658s)
--- PASS: TestScheduledStopUnix (110.72s)
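
Note: the behavior under test is that `--schedule` arms a delayed stop in a background process, and a later `--schedule` or `--cancel-scheduled` tears down the previous one; the "os: process already finished" lines are the benign signal that the earlier scheduled process was already gone. The sketch below is an illustrative pattern only, not minikube's actual code; the pidfile path and 15-second delay are assumptions:

	package main

	import (
		"fmt"
		"os"
		"strconv"
		"strings"
		"time"
	)

	const pidFile = "/tmp/scheduled-stop.pid" // hypothetical location

	// cancelExisting kills any previously scheduled stop; if that process
	// already exited, the error is harmless, mirroring the log lines above.
	func cancelExisting() {
		b, err := os.ReadFile(pidFile)
		if err != nil {
			return // nothing scheduled yet
		}
		pid, err := strconv.Atoi(strings.TrimSpace(string(b)))
		if err != nil {
			return
		}
		if proc, err := os.FindProcess(pid); err == nil {
			if err := proc.Kill(); err != nil {
				fmt.Println("previous schedule:", err)
			}
		}
	}

	func main() {
		cancelExisting()
		os.WriteFile(pidFile, []byte(strconv.Itoa(os.Getpid())), 0o644)
		fmt.Println("stop scheduled in 15s; rerun or kill to cancel")
		time.Sleep(15 * time.Second)
		fmt.Println("stopping cluster now") // real code would perform the stop
	}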

                                                
                                    
TestInsufficientStorage (13.6s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-060833 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-060833 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (10.954638744s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"8b76a82e-db78-40c0-981a-93423427ad4f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-060833] minikube v1.32.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"f3b2e4f6-2b9a-4299-bcb9-1d84c690322c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17885"}}
	{"specversion":"1.0","id":"7b6d02d1-61b3-4035-8a1f-0e768423f1b2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"62fa72bc-de36-4945-b4be-edcd1d343c1e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17885-409390/kubeconfig"}}
	{"specversion":"1.0","id":"d127952a-a691-40cd-adeb-29f41109547b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17885-409390/.minikube"}}
	{"specversion":"1.0","id":"9a6cf3f3-21d8-4c0b-8835-f721cda75a82","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"3380273b-bb8d-43d1-97a5-162166e5e790","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"242a04c6-04cf-4625-887c-0890bb24a020","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"879c9beb-f38d-4a3b-94d2-81ed077e31bd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"ce3e6b4b-31a8-41a3-a25a-a12107addbc3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"d7faa553-28fa-434e-abc0-f696b27abd00","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"867efa6b-5f99-4691-a16c-9de1724a8384","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-060833 in cluster insufficient-storage-060833","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"c5333867-e46f-462c-b9ee-4c62ab8e2233","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.42-1703498848-17857 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"430fe940-fb39-468b-b5f5-40676969db7e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"d1089188-abfa-4a5e-b090-f38092e05655","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-060833 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-060833 --output=json --layout=cluster: exit status 7 (336.807303ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-060833","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-060833","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0103 20:29:06.100916  512802 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-060833" does not appear in /home/jenkins/minikube-integration/17885-409390/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-060833 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-060833 --output=json --layout=cluster: exit status 7 (343.190847ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-060833","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-060833","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0103 20:29:06.445438  512855 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-060833" does not appear in /home/jenkins/minikube-integration/17885-409390/kubeconfig
	E0103 20:29:06.457964  512855 status.go:559] unable to read event log: stat: stat /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/insufficient-storage-060833/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-060833" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-060833
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-060833: (1.967279372s)
--- PASS: TestInsufficientStorage (13.60s)
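
Note: with --output=json, `minikube start` emits one CloudEvent per line, and the failure above surfaces as an io.k8s.sigs.minikube.error event whose data carries the machine-readable reason (RSRC_DOCKER_STORAGE) and exit code (26). A sketch that scans such a stream for that event; the field names are taken from the events printed above:

	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"os"
	)

	type event struct {
		Type string            `json:"type"`
		Data map[string]string `json:"data"`
	}

	func main() {
		sc := bufio.NewScanner(os.Stdin)
		sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // advice text can be long
		for sc.Scan() {
			var ev event
			if json.Unmarshal(sc.Bytes(), &ev) != nil {
				continue // tolerate any non-JSON lines
			}
			if ev.Type == "io.k8s.sigs.minikube.error" && ev.Data["name"] == "RSRC_DOCKER_STORAGE" {
				fmt.Println("out of disk; exit code:", ev.Data["exitcode"])
			}
		}
	}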

                                                
                                    
TestKubernetesUpgrade (418.23s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:235: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-753304 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E0103 20:31:02.282916  414763 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/addons-845596/client.crt: no such file or directory
version_upgrade_test.go:235: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-753304 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m12.138836716s)
version_upgrade_test.go:240: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-753304
version_upgrade_test.go:240: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-753304: (1.293938423s)
version_upgrade_test.go:245: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-753304 status --format={{.Host}}
version_upgrade_test.go:245: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-753304 status --format={{.Host}}: exit status 7 (96.582361ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:247: status error: exit status 7 (may be ok)
version_upgrade_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-753304 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-753304 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m44.77991415s)
version_upgrade_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-753304 version --output=json
version_upgrade_test.go:280: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:282: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-753304 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:282: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-753304 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=crio: exit status 106 (128.89919ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-753304] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17885
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17885-409390/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17885-409390/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.29.0-rc.2 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-753304
	    minikube start -p kubernetes-upgrade-753304 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-7533042 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.29.0-rc.2, by running:
	    
	    minikube start -p kubernetes-upgrade-753304 --kubernetes-version=v1.29.0-rc.2
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:286: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:288: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-753304 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:288: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-753304 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (56.929671128s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-753304" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-753304
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-753304: (2.726580027s)
--- PASS: TestKubernetesUpgrade (418.23s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-301144 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-301144 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio: exit status 14 (90.674144ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-301144] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17885
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17885-409390/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17885-409390/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (42.17s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-301144 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-301144 --driver=docker  --container-runtime=crio: (41.633814747s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-301144 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (42.17s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (8.45s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-301144 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-301144 --no-kubernetes --driver=docker  --container-runtime=crio: (5.886956593s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-301144 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-301144 status -o json: exit status 2 (422.396956ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-301144","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-301144
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-301144: (2.140935928s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (8.45s)
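
Note: the JSON above is the shape the test asserts on: host Running while kubelet and apiserver are Stopped is exactly what a profile restarted with --no-kubernetes should report. A sketch of decoding it, with struct fields mirroring the printed keys:

	package main

	import (
		"encoding/json"
		"fmt"
	)

	func main() {
		// Sample taken verbatim from the stdout block above.
		raw := []byte(`{"Name":"NoKubernetes-301144","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}`)
		var st struct {
			Name, Host, Kubelet, APIServer, Kubeconfig string
			Worker                                     bool
		}
		if err := json.Unmarshal(raw, &st); err != nil {
			panic(err)
		}
		fmt.Printf("host=%s kubelet=%s apiserver=%s\n", st.Host, st.Kubelet, st.APIServer)
	}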

                                                
                                    
TestNoKubernetes/serial/Start (9.68s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-301144 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-301144 --no-kubernetes --driver=docker  --container-runtime=crio: (9.677811964s)
--- PASS: TestNoKubernetes/serial/Start (9.68s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.35s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-301144 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-301144 "sudo systemctl is-active --quiet service kubelet": exit status 1 (352.112701ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.35s)
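
Note: the "ssh: Process exited with status 3" above is the expected outcome, not a failure: `systemctl is-active` exits non-zero for a unit that is not active (3 is the conventional "not running" code), so a stopped kubelet is precisely what a --no-kubernetes profile should show.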

                                                
                                    
TestNoKubernetes/serial/ProfileList (1.19s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.19s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.37s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-301144
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-301144: (1.373692503s)
--- PASS: TestNoKubernetes/serial/Stop (1.37s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (7.77s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-301144 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-301144 --driver=docker  --container-runtime=crio: (7.765692689s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.77s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.32s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-301144 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-301144 "sudo systemctl is-active --quiet service kubelet": exit status 1 (317.214601ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.32s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (1.37s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.37s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (0.76s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:219: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-077088
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.76s)

                                                
                                    
TestPause/serial/Start (83.67s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-589189 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
E0103 20:36:02.278785  414763 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/addons-845596/client.crt: no such file or directory
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-589189 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (1m23.66913434s)
--- PASS: TestPause/serial/Start (83.67s)

                                                
                                    
TestNetworkPlugins/group/false (6.43s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-942650 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-942650 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (366.062807ms)

                                                
                                                
-- stdout --
	* [false-942650] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17885
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17885-409390/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17885-409390/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0103 20:37:45.621965  551668 out.go:296] Setting OutFile to fd 1 ...
	I0103 20:37:45.622223  551668 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0103 20:37:45.622249  551668 out.go:309] Setting ErrFile to fd 2...
	I0103 20:37:45.622270  551668 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0103 20:37:45.626619  551668 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17885-409390/.minikube/bin
	I0103 20:37:45.627192  551668 out.go:303] Setting JSON to false
	I0103 20:37:45.628221  551668 start.go:128] hostinfo: {"hostname":"ip-172-31-29-130","uptime":8415,"bootTime":1704305851,"procs":197,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0103 20:37:45.628330  551668 start.go:138] virtualization:  
	I0103 20:37:45.630813  551668 out.go:177] * [false-942650] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0103 20:37:45.632514  551668 out.go:177]   - MINIKUBE_LOCATION=17885
	I0103 20:37:45.633945  551668 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0103 20:37:45.632708  551668 notify.go:220] Checking for updates...
	I0103 20:37:45.636803  551668 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17885-409390/kubeconfig
	I0103 20:37:45.638486  551668 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17885-409390/.minikube
	I0103 20:37:45.640274  551668 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0103 20:37:45.641876  551668 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0103 20:37:45.644031  551668 config.go:182] Loaded profile config "force-systemd-flag-518436": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0103 20:37:45.644203  551668 driver.go:392] Setting default libvirt URI to qemu:///system
	I0103 20:37:45.704520  551668 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I0103 20:37:45.704632  551668 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0103 20:37:45.860939  551668 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:35 OomKillDisable:true NGoroutines:45 SystemTime:2024-01-03 20:37:45.840851286 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215044096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0103 20:37:45.861065  551668 docker.go:295] overlay module found
	I0103 20:37:45.862870  551668 out.go:177] * Using the docker driver based on user configuration
	I0103 20:37:45.864476  551668 start.go:298] selected driver: docker
	I0103 20:37:45.864490  551668 start.go:902] validating driver "docker" against <nil>
	I0103 20:37:45.864502  551668 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0103 20:37:45.866506  551668 out.go:177] 
	W0103 20:37:45.868051  551668 out.go:239] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0103 20:37:45.869518  551668 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-942650 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-942650

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-942650

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-942650

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-942650

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-942650

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-942650

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-942650

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-942650

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-942650

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-942650

>>> host: /etc/nsswitch.conf:
* Profile "false-942650" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-942650"

>>> host: /etc/hosts:
* Profile "false-942650" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-942650"

>>> host: /etc/resolv.conf:
* Profile "false-942650" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-942650"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-942650

>>> host: crictl pods:
* Profile "false-942650" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-942650"

>>> host: crictl containers:
* Profile "false-942650" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-942650"

>>> k8s: describe netcat deployment:
error: context "false-942650" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-942650" does not exist

>>> k8s: netcat logs:
error: context "false-942650" does not exist

>>> k8s: describe coredns deployment:
error: context "false-942650" does not exist

>>> k8s: describe coredns pods:
error: context "false-942650" does not exist

>>> k8s: coredns logs:
error: context "false-942650" does not exist

>>> k8s: describe api server pod(s):
error: context "false-942650" does not exist

>>> k8s: api server logs:
error: context "false-942650" does not exist

>>> host: /etc/cni:
* Profile "false-942650" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-942650"

>>> host: ip a s:
* Profile "false-942650" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-942650"

>>> host: ip r s:
* Profile "false-942650" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-942650"

>>> host: iptables-save:
* Profile "false-942650" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-942650"

>>> host: iptables table nat:
* Profile "false-942650" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-942650"

>>> k8s: describe kube-proxy daemon set:
error: context "false-942650" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-942650" does not exist

>>> k8s: kube-proxy logs:
error: context "false-942650" does not exist

>>> host: kubelet daemon status:
* Profile "false-942650" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-942650"

>>> host: kubelet daemon config:
* Profile "false-942650" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-942650"

>>> k8s: kubelet logs:
* Profile "false-942650" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-942650"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-942650" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-942650"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-942650" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-942650"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-942650

>>> host: docker daemon status:
* Profile "false-942650" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-942650"

>>> host: docker daemon config:
* Profile "false-942650" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-942650"

>>> host: /etc/docker/daemon.json:
* Profile "false-942650" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-942650"

>>> host: docker system info:
* Profile "false-942650" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-942650"

>>> host: cri-docker daemon status:
* Profile "false-942650" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-942650"

>>> host: cri-docker daemon config:
* Profile "false-942650" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-942650"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-942650" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-942650"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-942650" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-942650"

>>> host: cri-dockerd version:
* Profile "false-942650" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-942650"

>>> host: containerd daemon status:
* Profile "false-942650" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-942650"

>>> host: containerd daemon config:
* Profile "false-942650" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-942650"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-942650" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-942650"

>>> host: /etc/containerd/config.toml:
* Profile "false-942650" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-942650"

>>> host: containerd config dump:
* Profile "false-942650" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-942650"

>>> host: crio daemon status:
* Profile "false-942650" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-942650"

>>> host: crio daemon config:
* Profile "false-942650" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-942650"

>>> host: /etc/crio:
* Profile "false-942650" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-942650"

>>> host: crio config:
* Profile "false-942650" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-942650"

----------------------- debugLogs end: false-942650 [took: 5.87129039s] --------------------------------
helpers_test.go:175: Cleaning up "false-942650" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-942650
--- PASS: TestNetworkPlugins/group/false (6.43s)

TestStartStop/group/old-k8s-version/serial/FirstStart (121.69s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-603571 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0
E0103 20:41:02.279375  414763 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/addons-845596/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-603571 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0: (2m1.692703889s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (121.69s)

TestStartStop/group/old-k8s-version/serial/DeployApp (10.54s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-603571 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [8fee07e5-ecdc-458b-b5b2-ed72cc5db127] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [8fee07e5-ecdc-458b-b5b2-ed72cc5db127] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.00301017s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-603571 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.54s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-603571 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-603571 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.09s)

TestStartStop/group/old-k8s-version/serial/Stop (12.02s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-603571 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-603571 --alsologtostderr -v=3: (12.01702041s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.02s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-603571 -n old-k8s-version-603571
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-603571 -n old-k8s-version-603571: exit status 7 (94.725594ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-603571 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.22s)

TestStartStop/group/old-k8s-version/serial/SecondStart (439.36s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-603571 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-603571 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0: (7m18.745574775s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-603571 -n old-k8s-version-603571
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (439.36s)

TestStartStop/group/no-preload/serial/FirstStart (67.84s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-261206 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
E0103 20:42:14.557971  414763 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/functional-155561/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-261206 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (1m7.843489667s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (67.84s)

TestStartStop/group/no-preload/serial/DeployApp (9.38s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-261206 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [67b01485-e3d2-4a59-85e7-9af2d03003df] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [67b01485-e3d2-4a59-85e7-9af2d03003df] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.003671779s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-261206 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.38s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.17s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-261206 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-261206 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.030062585s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-261206 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.17s)

TestStartStop/group/no-preload/serial/Stop (12.06s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-261206 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-261206 --alsologtostderr -v=3: (12.063731652s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.06s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-261206 -n no-preload-261206
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-261206 -n no-preload-261206: exit status 7 (92.557966ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-261206 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.22s)

TestStartStop/group/no-preload/serial/SecondStart (624.57s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-261206 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
E0103 20:44:11.513805  414763 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/functional-155561/client.crt: no such file or directory
E0103 20:45:07.612534  414763 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/ingress-addon-legacy-480050/client.crt: no such file or directory
E0103 20:46:02.279469  414763 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/addons-845596/client.crt: no such file or directory
E0103 20:47:04.568861  414763 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/ingress-addon-legacy-480050/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-261206 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (10m24.177534791s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-261206 -n no-preload-261206
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (624.57s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-kfb4t" [8b1080c7-d9b9-4ccb-82c8-d079c353d887] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004285045s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.14s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-kfb4t" [8b1080c7-d9b9-4ccb-82c8-d079c353d887] Running
E0103 20:49:11.514391  414763 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/functional-155561/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004001118s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-603571 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.14s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.3s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-603571 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20220726-ed811e41
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.30s)

TestStartStop/group/old-k8s-version/serial/Pause (3.76s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-603571 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-603571 -n old-k8s-version-603571
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-603571 -n old-k8s-version-603571: exit status 2 (393.765295ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-603571 -n old-k8s-version-603571
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-603571 -n old-k8s-version-603571: exit status 2 (381.26305ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-603571 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-603571 -n old-k8s-version-603571
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-603571 -n old-k8s-version-603571
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.76s)

TestStartStop/group/embed-certs/serial/FirstStart (82.7s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-089478 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-089478 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4: (1m22.701670119s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (82.70s)

TestStartStop/group/embed-certs/serial/DeployApp (10.4s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-089478 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [b12a6555-c208-4538-8ec9-85419c8d9f20] Pending
helpers_test.go:344: "busybox" [b12a6555-c208-4538-8ec9-85419c8d9f20] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0103 20:50:45.329094  414763 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/addons-845596/client.crt: no such file or directory
helpers_test.go:344: "busybox" [b12a6555-c208-4538-8ec9-85419c8d9f20] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.003153466s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-089478 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.40s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.19s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-089478 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-089478 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.052863623s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-089478 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.19s)

TestStartStop/group/embed-certs/serial/Stop (12.06s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-089478 --alsologtostderr -v=3
E0103 20:51:02.278496  414763 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/addons-845596/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-089478 --alsologtostderr -v=3: (12.055323854s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.06s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.24s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-089478 -n embed-certs-089478
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-089478 -n embed-certs-089478: exit status 7 (103.215701ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-089478 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.24s)

TestStartStop/group/embed-certs/serial/SecondStart (352.74s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-089478 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4
E0103 20:51:17.848550  414763 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/old-k8s-version-603571/client.crt: no such file or directory
E0103 20:51:17.853836  414763 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/old-k8s-version-603571/client.crt: no such file or directory
E0103 20:51:17.864138  414763 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/old-k8s-version-603571/client.crt: no such file or directory
E0103 20:51:17.884383  414763 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/old-k8s-version-603571/client.crt: no such file or directory
E0103 20:51:17.924654  414763 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/old-k8s-version-603571/client.crt: no such file or directory
E0103 20:51:18.005108  414763 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/old-k8s-version-603571/client.crt: no such file or directory
E0103 20:51:18.165474  414763 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/old-k8s-version-603571/client.crt: no such file or directory
E0103 20:51:18.485588  414763 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/old-k8s-version-603571/client.crt: no such file or directory
E0103 20:51:19.125893  414763 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/old-k8s-version-603571/client.crt: no such file or directory
E0103 20:51:20.406632  414763 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/old-k8s-version-603571/client.crt: no such file or directory
E0103 20:51:22.966765  414763 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/old-k8s-version-603571/client.crt: no such file or directory
E0103 20:51:28.087105  414763 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/old-k8s-version-603571/client.crt: no such file or directory
E0103 20:51:38.328285  414763 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/old-k8s-version-603571/client.crt: no such file or directory
E0103 20:51:58.808539  414763 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/old-k8s-version-603571/client.crt: no such file or directory
E0103 20:52:04.569153  414763 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/ingress-addon-legacy-480050/client.crt: no such file or directory
E0103 20:52:39.768915  414763 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/old-k8s-version-603571/client.crt: no such file or directory
E0103 20:54:01.689396  414763 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/old-k8s-version-603571/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-089478 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4: (5m52.128770793s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-089478 -n embed-certs-089478
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (352.74s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-kzhx7" [fa1dcc36-b5c0-4d23-86fc-65e8178ce48a] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004311713s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.12s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-kzhx7" [fa1dcc36-b5c0-4d23-86fc-65e8178ce48a] Running
E0103 20:54:11.514557  414763 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/functional-155561/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004219475s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-261206 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.12s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.27s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-261206 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.27s)

TestStartStop/group/no-preload/serial/Pause (3.52s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-261206 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-261206 -n no-preload-261206
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-261206 -n no-preload-261206: exit status 2 (368.87663ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-261206 -n no-preload-261206
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-261206 -n no-preload-261206: exit status 2 (377.435574ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-261206 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-261206 -n no-preload-261206
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-261206 -n no-preload-261206
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.52s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (77.66s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-387647 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-387647 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4: (1m17.655763808s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (77.66s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.4s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-387647 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [890fdf2e-4ba8-40e4-a397-1f3c4e4b33a6] Pending
helpers_test.go:344: "busybox" [890fdf2e-4ba8-40e4-a397-1f3c4e4b33a6] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [890fdf2e-4ba8-40e4-a397-1f3c4e4b33a6] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.004353557s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-387647 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.40s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.24s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-387647 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-387647 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.118257502s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-387647 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.24s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (12.05s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-387647 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-387647 --alsologtostderr -v=3: (12.053788355s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.05s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.24s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-387647 -n default-k8s-diff-port-387647
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-387647 -n default-k8s-diff-port-387647: exit status 7 (95.864627ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-387647 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.24s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (605.3s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-387647 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4
E0103 20:56:02.279160  414763 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/addons-845596/client.crt: no such file or directory
E0103 20:56:17.848213  414763 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/old-k8s-version-603571/client.crt: no such file or directory
E0103 20:56:45.530371  414763 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/old-k8s-version-603571/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-387647 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4: (10m4.812226567s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-387647 -n default-k8s-diff-port-387647
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (605.30s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (10.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-jkdl4" [4836ad2b-a8f1-4824-9652-d167552f1961] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-jkdl4" [4836ad2b-a8f1-4824-9652-d167552f1961] Running
E0103 20:57:04.569008  414763 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/ingress-addon-legacy-480050/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 10.003898654s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (10.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.12s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-jkdl4" [4836ad2b-a8f1-4824-9652-d167552f1961] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003760901s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-089478 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.12s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-089478 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.26s)
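
The image check above parses "image list --format=json". A short sketch of doing the same by hand; the repoTags field name is an assumption based on minikube's CRI-style image listing, and jq is assumed to be installed:

    # dump the profile's loaded images as JSON
    out/minikube-linux-arm64 -p embed-certs-089478 image list --format=json
    # (assumed schema) extract just the image references with jq
    out/minikube-linux-arm64 -p embed-certs-089478 image list --format=json \
      | jq -r '.[].repoTags[]'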

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Pause (3.52s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-089478 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-089478 -n embed-certs-089478
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-089478 -n embed-certs-089478: exit status 2 (366.795082ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-089478 -n embed-certs-089478
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-089478 -n embed-certs-089478: exit status 2 (419.288726ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-089478 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-089478 -n embed-certs-089478
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-089478 -n embed-certs-089478
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.52s)
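
The non-zero exits above are expected: while a profile is paused, status exits 2 by design, which the test logs as "may be ok". A minimal sketch of the same pause/inspect/unpause cycle:

    out/minikube-linux-arm64 pause -p embed-certs-089478
    # while paused: APIServer prints "Paused", Kubelet prints "Stopped",
    # and each status call exits 2 rather than 0
    out/minikube-linux-arm64 status --format='{{.APIServer}}' -p embed-certs-089478
    out/minikube-linux-arm64 status --format='{{.Kubelet}}' -p embed-certs-089478
    # resume; subsequent status calls exit 0 again
    out/minikube-linux-arm64 unpause -p embed-certs-089478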

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (47.74s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-688060 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-688060 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (47.743462123s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (47.74s)
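
The start command above shows the --extra-config pattern for passing settings through to an individual component, in component.key=value form. The same flags, reformatted for readability:

    # feature gates plus a kubeadm-scoped extra-config (pod network CIDR)
    out/minikube-linux-arm64 start -p newest-cni-688060 --memory=2200 \
      --wait=apiserver,system_pods,default_sa \
      --feature-gates ServerSideApply=true --network-plugin=cni \
      --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 \
      --driver=docker --container-runtime=crio --kubernetes-version=v1.29.0-rc.2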

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.01s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-688060 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-688060 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.010145557s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.01s)
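
The enable command above also demonstrates per-addon image and registry overrides; the registry is pointed at a deliberately unreachable fake.domain, presumably so the test never pulls the real image. Reformatted:

    # override both the addon's image and the registry it is pulled from
    out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-688060 \
      --images=MetricsServer=registry.k8s.io/echoserver:1.4 \
      --registries=MetricsServer=fake.domain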

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (1.31s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-688060 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-688060 --alsologtostderr -v=3: (1.307683495s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.31s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-688060 -n newest-cni-688060
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-688060 -n newest-cni-688060: exit status 7 (104.773904ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-688060 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.24s)
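
Exit status 7 from status is how the test distinguishes a cleanly stopped host from an error, and addons can still be enabled in that state to take effect on the next start. A sketch:

    # prints "Stopped" and exits 7 while the host is down (treated as ok here)
    out/minikube-linux-arm64 status --format='{{.Host}}' -p newest-cni-688060
    # enabling an addon against a stopped profile is allowed
    out/minikube-linux-arm64 addons enable dashboard -p newest-cni-688060 \
      --images=MetricsScraper=registry.k8s.io/echoserver:1.4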

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (30.18s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-688060 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
E0103 20:58:15.744088  414763 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/no-preload-261206/client.crt: no such file or directory
E0103 20:58:15.749327  414763 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/no-preload-261206/client.crt: no such file or directory
E0103 20:58:15.759563  414763 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/no-preload-261206/client.crt: no such file or directory
E0103 20:58:15.779884  414763 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/no-preload-261206/client.crt: no such file or directory
E0103 20:58:15.820202  414763 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/no-preload-261206/client.crt: no such file or directory
E0103 20:58:15.901249  414763 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/no-preload-261206/client.crt: no such file or directory
E0103 20:58:16.061381  414763 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/no-preload-261206/client.crt: no such file or directory
E0103 20:58:16.382187  414763 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/no-preload-261206/client.crt: no such file or directory
E0103 20:58:17.022577  414763 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/no-preload-261206/client.crt: no such file or directory
E0103 20:58:18.302978  414763 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/no-preload-261206/client.crt: no such file or directory
E0103 20:58:20.863778  414763 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/no-preload-261206/client.crt: no such file or directory
E0103 20:58:25.984083  414763 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/no-preload-261206/client.crt: no such file or directory
E0103 20:58:36.224659  414763 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/no-preload-261206/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-688060 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (29.791439097s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-688060 -n newest-cni-688060
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (30.18s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.28s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-688060 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.28s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (3.36s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-688060 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-688060 -n newest-cni-688060
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-688060 -n newest-cni-688060: exit status 2 (479.30549ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-688060 -n newest-cni-688060
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-688060 -n newest-cni-688060: exit status 2 (378.498565ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-688060 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-688060 -n newest-cni-688060
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-688060 -n newest-cni-688060
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.36s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (74.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-942650 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
E0103 20:58:54.558761  414763 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/functional-155561/client.crt: no such file or directory
E0103 20:58:56.705609  414763 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/no-preload-261206/client.crt: no such file or directory
E0103 20:59:11.514669  414763 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/functional-155561/client.crt: no such file or directory
E0103 20:59:37.666726  414763 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/no-preload-261206/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-942650 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (1m14.207275741s)
--- PASS: TestNetworkPlugins/group/auto/Start (74.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.51s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-942650 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.51s)
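
pgrep -a prints each matching process together with its full command line, which is what lets the test assert on individual kubelet flags. The same check by hand:

    # show the kubelet command line inside the node, flags included
    out/minikube-linux-arm64 ssh -p auto-942650 "pgrep -a kubelet"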

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (10.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-942650 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-gtnqq" [c5290846-7aeb-4c56-9b1b-816f4c64fc3f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-gtnqq" [c5290846-7aeb-4c56-9b1b-816f4c64fc3f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.003372761s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.35s)
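
kubectl replace --force deletes and recreates the object in one step, so a rerun always starts from a clean deployment. A rough equivalent of the flow above, using kubectl wait instead of the test's own polling helper (the app=netcat selector and 15m budget are taken from the log):

    kubectl --context auto-942650 replace --force -f testdata/netcat-deployment.yaml
    # block until the pod is Ready, up to the test's 15m budget
    kubectl --context auto-942650 wait --for=condition=ready pod -l app=netcat --timeout=15m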

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-942650 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-942650 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-942650 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.19s)
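
The three probes above cover the basic CNI contract: service DNS, loopback reachability, and hairpin traffic (the pod dialing back to itself through what is presumably its own netcat service). Collected into one sketch, commands verbatim from the log:

    # 1. service DNS resolves from inside a pod
    kubectl --context auto-942650 exec deployment/netcat -- nslookup kubernetes.default
    # 2. loopback reachability on the pod's own port
    kubectl --context auto-942650 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
    # 3. hairpin: reach the same pod back via its service name
    kubectl --context auto-942650 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"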

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (62.93s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-942650 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
E0103 21:00:59.586893  414763 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/no-preload-261206/client.crt: no such file or directory
E0103 21:01:02.278827  414763 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/addons-845596/client.crt: no such file or directory
E0103 21:01:17.848143  414763 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/old-k8s-version-603571/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-942650 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (1m2.932083547s)
--- PASS: TestNetworkPlugins/group/flannel/Start (62.93s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-zmzmx" [9b5dd7d8-af2e-4384-99f3-27fdaca96db7] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003674213s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
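
The readiness poll above can be approximated with kubectl wait; this is a rough equivalent, not the test's actual helper (selector and namespace are taken from the log):

    # block until the flannel DaemonSet pod reports Ready
    kubectl --context flannel-942650 -n kube-flannel wait \
      --for=condition=ready pod -l app=flannel --timeout=10m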

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-942650 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.36s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (11.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-942650 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-fqbz6" [103042c5-8aa3-4d80-a4c0-10541611ca9a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0103 21:01:47.612739  414763 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/ingress-addon-legacy-480050/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-fqbz6" [103042c5-8aa3-4d80-a4c0-10541611ca9a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.004170803s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.31s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-942650 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-942650 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-942650 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (69.84s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-942650 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
E0103 21:03:15.744836  414763 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/no-preload-261206/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-942650 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (1m9.836021447s)
--- PASS: TestNetworkPlugins/group/calico/Start (69.84s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-282b7" [67ff5264-8f5e-4242-903a-5e26b0c75ad1] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004902464s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.37s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-942650 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.37s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (11.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-942650 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-nn9cx" [8709f13b-cc56-4812-8638-235d81b118ac] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-nn9cx" [8709f13b-cc56-4812-8638-235d81b118ac] Running
E0103 21:03:43.427163  414763 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/no-preload-261206/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.004923124s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-942650 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-942650 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-942650 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (66.39s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-942650 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
E0103 21:05:01.784517  414763 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/auto-942650/client.crt: no such file or directory
E0103 21:05:01.789795  414763 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/auto-942650/client.crt: no such file or directory
E0103 21:05:01.800270  414763 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/auto-942650/client.crt: no such file or directory
E0103 21:05:01.820537  414763 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/auto-942650/client.crt: no such file or directory
E0103 21:05:01.861243  414763 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/auto-942650/client.crt: no such file or directory
E0103 21:05:01.941491  414763 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/auto-942650/client.crt: no such file or directory
E0103 21:05:02.101787  414763 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/auto-942650/client.crt: no such file or directory
E0103 21:05:02.422167  414763 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/auto-942650/client.crt: no such file or directory
E0103 21:05:03.063009  414763 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/auto-942650/client.crt: no such file or directory
E0103 21:05:04.344135  414763 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/auto-942650/client.crt: no such file or directory
E0103 21:05:06.904355  414763 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/auto-942650/client.crt: no such file or directory
E0103 21:05:12.025074  414763 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/auto-942650/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-942650 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (1m6.393444518s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (66.39s)
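
Unlike the earlier runs that pass a built-in CNI name, --cni here takes a path to a manifest, letting the test install its own flannel variant. Trimmed sketch:

    # --cni accepts a CNI manifest path as well as built-in plugin names
    out/minikube-linux-arm64 start -p custom-flannel-942650 --memory=3072 \
      --cni=testdata/kube-flannel.yaml --driver=docker --container-runtime=crio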

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-942650 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.33s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (10.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-942650 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-j945v" [e8d841fa-a2e7-4280-b4fd-abcbeb4f3df5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0103 21:05:22.265800  414763 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/auto-942650/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-j945v" [e8d841fa-a2e7-4280-b4fd-abcbeb4f3df5] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.00412009s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.31s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-942650 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-942650 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-942650 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (75.83s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-942650 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
E0103 21:06:02.279200  414763 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/addons-845596/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-942650 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (1m15.82974355s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (75.83s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-zpt2k" [db938ec2-4c2c-4b89-9f8b-778085338fb8] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004070584s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.14s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-zpt2k" [db938ec2-4c2c-4b89-9f8b-778085338fb8] Running
E0103 21:06:17.847938  414763 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/old-k8s-version-603571/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004235161s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-387647 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.14s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.42s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-387647 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.42s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Pause (4.52s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-387647 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p default-k8s-diff-port-387647 --alsologtostderr -v=1: (1.245601179s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-387647 -n default-k8s-diff-port-387647
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-387647 -n default-k8s-diff-port-387647: exit status 2 (450.33242ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-387647 -n default-k8s-diff-port-387647
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-387647 -n default-k8s-diff-port-387647: exit status 2 (392.849764ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-387647 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-387647 -n default-k8s-diff-port-387647
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-387647 -n default-k8s-diff-port-387647
E0103 21:06:23.707285  414763 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/auto-942650/client.crt: no such file or directory
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (4.52s)
E0103 21:08:35.179437  414763 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/calico-942650/client.crt: no such file or directory
E0103 21:08:40.299880  414763 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/calico-942650/client.crt: no such file or directory
E0103 21:08:50.540657  414763 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/calico-942650/client.crt: no such file or directory
E0103 21:09:11.021297  414763 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/calico-942650/client.crt: no such file or directory
E0103 21:09:11.514610  414763 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/functional-155561/client.crt: no such file or directory
E0103 21:09:21.525655  414763 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/flannel-942650/client.crt: no such file or directory

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (85.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-942650 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
E0103 21:06:37.681148  414763 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/flannel-942650/client.crt: no such file or directory
E0103 21:06:37.686614  414763 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/flannel-942650/client.crt: no such file or directory
E0103 21:06:37.696845  414763 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/flannel-942650/client.crt: no such file or directory
E0103 21:06:37.718630  414763 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/flannel-942650/client.crt: no such file or directory
E0103 21:06:37.759355  414763 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/flannel-942650/client.crt: no such file or directory
E0103 21:06:37.839635  414763 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/flannel-942650/client.crt: no such file or directory
E0103 21:06:38.000479  414763 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/flannel-942650/client.crt: no such file or directory
E0103 21:06:38.321541  414763 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/flannel-942650/client.crt: no such file or directory
E0103 21:06:38.961865  414763 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/flannel-942650/client.crt: no such file or directory
E0103 21:06:40.242473  414763 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/flannel-942650/client.crt: no such file or directory
E0103 21:06:42.802663  414763 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/flannel-942650/client.crt: no such file or directory
E0103 21:06:47.923547  414763 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/flannel-942650/client.crt: no such file or directory
E0103 21:06:58.164229  414763 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/flannel-942650/client.crt: no such file or directory
E0103 21:07:04.568246  414763 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/ingress-addon-legacy-480050/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-942650 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m25.340662614s)
--- PASS: TestNetworkPlugins/group/bridge/Start (85.34s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-rkbqn" [28f717f8-fc6a-4c94-adbf-fae55a1d8eac] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.00404095s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-942650 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.33s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (11.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-942650 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-f9t59" [9d3051a3-9c98-47d5-ab4e-3b344a9b41f7] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0103 21:07:18.644494  414763 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/flannel-942650/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-f9t59" [9d3051a3-9c98-47d5-ab4e-3b344a9b41f7] Running
E0103 21:07:25.330111  414763 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/addons-845596/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.004198111s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-942650 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-942650 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-942650 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (93.06s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-942650 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-942650 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m33.059010648s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (93.06s)
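
--enable-default-cni=true is the legacy spelling; in recent minikube releases it is deprecated and behaves roughly like --cni=bridge (an equivalence assumed from minikube's deprecation notice, not from this log). Sketch:

    # legacy flag, roughly equivalent to --cni=bridge on current minikube
    out/minikube-linux-arm64 start -p enable-default-cni-942650 --memory=3072 \
      --enable-default-cni=true --driver=docker --container-runtime=crio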

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.45s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-942650 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.45s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (12.38s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-942650 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-8hnjg" [49b189bd-4f6a-4dc9-b399-317296d18a9d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0103 21:07:59.604956  414763 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17885-409390/.minikube/profiles/flannel-942650/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-8hnjg" [49b189bd-4f6a-4dc9-b399-317296d18a9d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 12.004552872s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (12.38s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-942650 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-942650 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-942650 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-942650 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.32s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-942650 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-lgqtt" [846b5eac-16e4-455c-b623-bf3e4e97eed9] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-lgqtt" [846b5eac-16e4-455c-b623-bf3e4e97eed9] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.003651834s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-942650 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-942650 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-942650 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

                                                
                                    

Test skip (32/310)

x
+
TestDownloadOnly/v1.16.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)
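This skip, and the cached-images/binaries skips that follow, hinge on a preload check: if the preloaded-images tarball for the requested Kubernetes version is already in minikube's cache, there is nothing left to cache individually. A hedged sketch of such a check (the cache path and tarball name are assumptions, not the exact aaa_download_only_test.go logic):

    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    )

    func main() {
    	home, err := os.UserHomeDir()
    	if err != nil {
    		panic(err)
    	}
    	// Hypothetical tarball name; the real one encodes the Kubernetes
    	// version, container runtime, and architecture.
    	tarball := filepath.Join(home, ".minikube", "cache", "preloaded-tarball",
    		"preloaded-images-k8s-v1.16.0-cri-o-overlay-arm64.tar.lz4")
    	if _, err := os.Stat(tarball); err == nil {
    		fmt.Println("Preload exists, images won't be cached")
    	} else {
    		fmt.Println("no preload; images would be cached individually")
    	}
    }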

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within it.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:155: Test only applies to darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.4/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within it.
--- SKIP: TestDownloadOnly/v1.28.4/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/kubectl
aaa_download_only_test.go:155: Test only applies to darwin and windows
--- SKIP: TestDownloadOnly/v1.28.4/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within it.
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/kubectl
aaa_download_only_test.go:155: Test only applies to darwin and windows
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0.65s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:225: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-213592 --alsologtostderr --driver=docker  --container-runtime=crio
aaa_download_only_test.go:237: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-213592" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-213592
--- SKIP: TestDownloadOnlyKic (0.65s)

                                                
                                    
x
+
TestOffline (0s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

                                                
                                    
x
+
TestAddons/parallel/HelmTiller (0s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:444: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:498: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestKVMDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/MySQL (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1786: arm64 is not supported by mysql; skipping the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin; skipping the DNS forwarding test
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin; skipping the DNS forwarding test
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin; skipping the DNS forwarding test
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)
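Platform-gated skips like this one are a one-line guard at the top of the test. A minimal sketch (the package name and body are placeholders, not minikube's actual test file):

    package scheduledstop_test

    import (
    	"runtime"
    	"testing"
    )

    // TestScheduledStopWindows skips everywhere except Windows, matching
    // the guard reported above.
    func TestScheduledStopWindows(t *testing.T) {
    	if runtime.GOOS != "windows" {
    		t.Skip("test only runs on windows")
    	}
    	// Real assertions about `minikube stop --schedule` would follow here.
    }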

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-001607" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-001607
--- SKIP: TestStartStop/group/disable-driver-mounts (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (5.62s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as the crio container runtime requires CNI
panic.go:523: 
----------------------- debugLogs start: kubenet-942650 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-942650

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-942650

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-942650

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-942650

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-942650

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-942650

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-942650

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-942650

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-942650

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-942650

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-942650" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-942650"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-942650" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-942650"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-942650" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-942650"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments, and pods:
Error in configuration: context was not found for specified context: kubenet-942650

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-942650" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-942650"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-942650" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-942650"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-942650" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-942650" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-942650" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-942650" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-942650" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-942650" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-942650" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-942650" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-942650" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-942650"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-942650" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-942650"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-942650" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-942650"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-942650" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-942650"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-942650" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-942650"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-942650" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-942650" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-942650" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-942650" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-942650"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-942650" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-942650"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-942650" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-942650"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-942650" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-942650"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-942650" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-942650"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-942650

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-942650" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-942650"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-942650" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-942650"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-942650" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-942650"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-942650" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-942650"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-942650" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-942650"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-942650" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-942650"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-942650" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-942650"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-942650" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-942650"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-942650" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-942650"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-942650" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-942650"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-942650" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-942650"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-942650" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-942650"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-942650" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-942650"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-942650" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-942650"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-942650" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-942650"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-942650" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-942650"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-942650" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-942650"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-942650" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-942650"

                                                
                                                
----------------------- debugLogs end: kubenet-942650 [took: 5.215558348s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-942650" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-942650
--- SKIP: TestNetworkPlugins/group/kubenet (5.62s)
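All of the "context was not found" noise above comes from debugLogs probing a profile that was deliberately never started. A hedged sketch of checking for the kubeconfig context up front, using client-go's loader (not minikube's actual code):

    package main

    import (
    	"fmt"

    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Load the merged kubeconfig the same way kubectl does.
    	rules := clientcmd.NewDefaultClientConfigLoadingRules()
    	cfg, err := rules.Load()
    	if err != nil {
    		panic(err)
    	}
    	const name = "kubenet-942650"
    	if _, ok := cfg.Contexts[name]; !ok {
    		fmt.Printf("context %q was not found; skipping debug log collection\n", name)
    		return
    	}
    	fmt.Printf("context %q exists; safe to collect logs\n", name)
    }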

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (6.42s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it interferes with other tests and is outdated
panic.go:523: 
----------------------- debugLogs start: cilium-942650 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-942650

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-942650

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-942650

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-942650

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-942650

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-942650

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-942650

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-942650

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-942650

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-942650

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-942650" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-942650"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-942650" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-942650"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-942650" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-942650"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments, and pods:
Error in configuration: context was not found for specified context: cilium-942650

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-942650" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-942650"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-942650" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-942650"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-942650" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-942650" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-942650" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-942650" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-942650" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-942650" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-942650" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-942650" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-942650" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-942650"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-942650" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-942650"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-942650" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-942650"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-942650" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-942650"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-942650" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-942650"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-942650

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-942650

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-942650" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-942650" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-942650

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-942650

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-942650" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-942650" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-942650" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-942650" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-942650" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-942650" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-942650"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-942650" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-942650"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-942650" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-942650"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-942650" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-942650"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-942650" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-942650"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-942650

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-942650" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-942650"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-942650" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-942650"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-942650" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-942650"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-942650" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-942650"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-942650" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-942650"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-942650" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-942650"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-942650" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-942650"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-942650" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-942650"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-942650" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-942650"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-942650" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-942650"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-942650" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-942650"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-942650" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-942650"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-942650" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-942650"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-942650" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-942650"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-942650" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-942650"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-942650" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-942650"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-942650" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-942650"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-942650" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-942650"

                                                
                                                
----------------------- debugLogs end: cilium-942650 [took: 6.146770252s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-942650" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-942650
--- SKIP: TestNetworkPlugins/group/cilium (6.42s)

                                                
                                    