Test Report: Docker_Linux_crio_arm64 17206

f478b3e95ad7f4002b1f24747b20ea33f6e08bc3:2023-11-28:32057

Test failures (7/314)

| Order | Failed test                                         | Duration (s) |
|-------|-----------------------------------------------------|--------------|
| 35    | TestAddons/parallel/Ingress                         | 168.81       |
| 166   | TestIngressAddonLegacy/serial/ValidateIngressAddons | 181.13       |
| 216   | TestMultiNode/serial/PingHostFrom2Pods              | 4.35         |
| 233   | TestScheduledStopUnix                               | 34.06        |
| 237   | TestRunningBinaryUpgrade                            | 96.64        |
| 240   | TestMissingContainerUpgrade                         | 464.39       |
| 252   | TestStoppedBinaryUpgrade/Upgrade                    | 2069.2       |
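
Each failure is detailed below. For local reproduction, a single failed test can be re-run through minikube's integration suite. A minimal sketch, assuming a minikube source checkout and following the TEST_ARGS pattern from minikube's testing docs (treat the exact flag spelling as an assumption; the driver and runtime values come from this report's configuration):

    # re-run just the first failure against the docker driver with the cri-o runtime
    env TEST_ARGS="-minikube-start-args=--driver=docker --container-runtime=crio -test.run TestAddons/parallel/Ingress" make integration
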
TestAddons/parallel/Ingress (168.81s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:206: (dbg) Run:  kubectl --context addons-606180 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:231: (dbg) Run:  kubectl --context addons-606180 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:244: (dbg) Run:  kubectl --context addons-606180 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:249: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [cc8a21fd-9568-4bef-979a-f291ba746a9c] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [cc8a21fd-9568-4bef-979a-f291ba746a9c] Running
addons_test.go:249: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.015371011s
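
The readiness wait above is done by the test harness; the equivalent check can be reproduced by hand with kubectl wait (context, selector, and budget taken from this log):

    # block until the nginx pod reports Ready, up to the test's 8m budget
    kubectl --context addons-606180 -n default wait --for=condition=ready pod -l run=nginx --timeout=8m0s
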
addons_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p addons-606180 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-606180 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.874647011s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:277: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
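
Status 28 here is curl's "operation timed out" exit code surfacing through ssh: the request to the in-cluster ingress controller never completed, rather than returning a bad status. A hedged manual re-check, assuming the profile is still running (the -m 30 cap and -w status output are additions for diagnosis, not part of the original test command):

    # probe the ingress from inside the node and print only the HTTP status
    out/minikube-linux-arm64 -p addons-606180 ssh "curl -s -m 30 -o /dev/null -w '%{http_code}' http://127.0.0.1/ -H 'Host: nginx.example.com'"
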
addons_test.go:285: (dbg) Run:  kubectl --context addons-606180 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p addons-606180 ip
addons_test.go:296: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:296: (dbg) Non-zero exit: nslookup hello-john.test 192.168.49.2: exit status 1 (15.047951021s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
addons_test.go:298: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.49.2" : exit status 1
addons_test.go:302: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached

stderr: 
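
"connection timed out; no servers could be reached" means nothing answered DNS queries at 192.168.49.2 at all, consistent with the ingress probe timing out above. Two hedged manual checks, reusing the node IP this test obtained from `minikube ip`:

    # ask the ingress-dns responder on the node directly
    nslookup hello-john.test "$(out/minikube-linux-arm64 -p addons-606180 ip)"
    # same query with an explicit, short timeout
    dig +time=5 +tries=1 hello-john.test @192.168.49.2
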
addons_test.go:305: (dbg) Run:  out/minikube-linux-arm64 -p addons-606180 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:305: (dbg) Done: out/minikube-linux-arm64 -p addons-606180 addons disable ingress-dns --alsologtostderr -v=1: (1.407774539s)
addons_test.go:310: (dbg) Run:  out/minikube-linux-arm64 -p addons-606180 addons disable ingress --alsologtostderr -v=1
addons_test.go:310: (dbg) Done: out/minikube-linux-arm64 -p addons-606180 addons disable ingress --alsologtostderr -v=1: (7.779820568s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-606180
helpers_test.go:235: (dbg) docker inspect addons-606180:

-- stdout --
	[
	    {
	        "Id": "19915e5831eec0251ac0713929a7905f869dc7efac21d28c7e3bb5ff67459bb6",
	        "Created": "2023-11-27T23:30:54.67166163Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1461705,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-11-27T23:30:55.018689662Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:6b2707af759835616662ab4511fa7cfc968ed5500b8f30a5d231d9af64582310",
	        "ResolvConfPath": "/var/lib/docker/containers/19915e5831eec0251ac0713929a7905f869dc7efac21d28c7e3bb5ff67459bb6/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/19915e5831eec0251ac0713929a7905f869dc7efac21d28c7e3bb5ff67459bb6/hostname",
	        "HostsPath": "/var/lib/docker/containers/19915e5831eec0251ac0713929a7905f869dc7efac21d28c7e3bb5ff67459bb6/hosts",
	        "LogPath": "/var/lib/docker/containers/19915e5831eec0251ac0713929a7905f869dc7efac21d28c7e3bb5ff67459bb6/19915e5831eec0251ac0713929a7905f869dc7efac21d28c7e3bb5ff67459bb6-json.log",
	        "Name": "/addons-606180",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-606180:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-606180",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/f892d6571e57da873661d594278dddd65b1949cbc13a51c27a8c01072b84dbc5-init/diff:/var/lib/docker/overlay2/66e18f6b92e8847ad9065a2bde54888b27c493e8cb472385d095e2aee2f57672/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f892d6571e57da873661d594278dddd65b1949cbc13a51c27a8c01072b84dbc5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f892d6571e57da873661d594278dddd65b1949cbc13a51c27a8c01072b84dbc5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f892d6571e57da873661d594278dddd65b1949cbc13a51c27a8c01072b84dbc5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "addons-606180",
	                "Source": "/var/lib/docker/volumes/addons-606180/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-606180",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-606180",
	                "name.minikube.sigs.k8s.io": "addons-606180",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "362abb27ceb5265e466e0a8ca7b9e34f3784edd5eae55a7af949fe5f6d31b0d3",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34069"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34068"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34065"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34067"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34066"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/362abb27ceb5",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-606180": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "19915e5831ee",
	                        "addons-606180"
	                    ],
	                    "NetworkID": "30678591814ae9280d0e33e670e21d0ab629740c942e7e2d41e721720ab70e1d",
	                    "EndpointID": "2576858c9652177e47f9a695c6c9955fbebe8c9a11f5e9f1abf80f7feec75cc5",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
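
When only a few fields of this output matter, docker inspect accepts a Go template instead of dumping the whole document; the port template below is the same one the harness itself runs later in this log, and 34069 is the mapped ssh port shown above:

    # container state at a glance
    docker inspect -f '{{.State.Status}}' addons-606180
    # host port mapped to the node's sshd (22/tcp), here 34069
    docker inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-606180
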
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-606180 -n addons-606180
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-606180 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-606180 logs -n 25: (1.593373944s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| delete  | --all                                                                                       | minikube               | jenkins | v1.32.0 | 27 Nov 23 23:30 UTC | 27 Nov 23 23:30 UTC |
	| delete  | -p download-only-717158                                                                     | download-only-717158   | jenkins | v1.32.0 | 27 Nov 23 23:30 UTC | 27 Nov 23 23:30 UTC |
	| delete  | -p download-only-717158                                                                     | download-only-717158   | jenkins | v1.32.0 | 27 Nov 23 23:30 UTC | 27 Nov 23 23:30 UTC |
	| start   | --download-only -p                                                                          | download-docker-108856 | jenkins | v1.32.0 | 27 Nov 23 23:30 UTC |                     |
	|         | download-docker-108856                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p download-docker-108856                                                                   | download-docker-108856 | jenkins | v1.32.0 | 27 Nov 23 23:30 UTC | 27 Nov 23 23:30 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-434652   | jenkins | v1.32.0 | 27 Nov 23 23:30 UTC |                     |
	|         | binary-mirror-434652                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:33683                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-434652                                                                     | binary-mirror-434652   | jenkins | v1.32.0 | 27 Nov 23 23:30 UTC | 27 Nov 23 23:30 UTC |
	| addons  | enable dashboard -p                                                                         | addons-606180          | jenkins | v1.32.0 | 27 Nov 23 23:30 UTC |                     |
	|         | addons-606180                                                                               |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-606180          | jenkins | v1.32.0 | 27 Nov 23 23:30 UTC |                     |
	|         | addons-606180                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-606180 --wait=true                                                                | addons-606180          | jenkins | v1.32.0 | 27 Nov 23 23:30 UTC | 27 Nov 23 23:33 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	| ip      | addons-606180 ip                                                                            | addons-606180          | jenkins | v1.32.0 | 27 Nov 23 23:33 UTC | 27 Nov 23 23:33 UTC |
	| addons  | addons-606180 addons disable                                                                | addons-606180          | jenkins | v1.32.0 | 27 Nov 23 23:33 UTC | 27 Nov 23 23:33 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-606180          | jenkins | v1.32.0 | 27 Nov 23 23:33 UTC | 27 Nov 23 23:33 UTC |
	|         | -p addons-606180                                                                            |                        |         |         |                     |                     |
	| ssh     | addons-606180 ssh cat                                                                       | addons-606180          | jenkins | v1.32.0 | 27 Nov 23 23:33 UTC | 27 Nov 23 23:33 UTC |
	|         | /opt/local-path-provisioner/pvc-92fea970-b462-46c9-a754-48da8038f828_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-606180 addons disable                                                                | addons-606180          | jenkins | v1.32.0 | 27 Nov 23 23:33 UTC | 27 Nov 23 23:34 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-606180 addons                                                                        | addons-606180          | jenkins | v1.32.0 | 27 Nov 23 23:33 UTC | 27 Nov 23 23:34 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-606180 addons                                                                        | addons-606180          | jenkins | v1.32.0 | 27 Nov 23 23:34 UTC | 27 Nov 23 23:34 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-606180          | jenkins | v1.32.0 | 27 Nov 23 23:34 UTC | 27 Nov 23 23:34 UTC |
	|         | addons-606180                                                                               |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-606180          | jenkins | v1.32.0 | 27 Nov 23 23:34 UTC | 27 Nov 23 23:34 UTC |
	|         | -p addons-606180                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-606180 addons                                                                        | addons-606180          | jenkins | v1.32.0 | 27 Nov 23 23:34 UTC | 27 Nov 23 23:34 UTC |
	|         | disable metrics-server                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-606180          | jenkins | v1.32.0 | 27 Nov 23 23:34 UTC | 27 Nov 23 23:34 UTC |
	|         | addons-606180                                                                               |                        |         |         |                     |                     |
	| ssh     | addons-606180 ssh curl -s                                                                   | addons-606180          | jenkins | v1.32.0 | 27 Nov 23 23:34 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| ip      | addons-606180 ip                                                                            | addons-606180          | jenkins | v1.32.0 | 27 Nov 23 23:36 UTC | 27 Nov 23 23:36 UTC |
	| addons  | addons-606180 addons disable                                                                | addons-606180          | jenkins | v1.32.0 | 27 Nov 23 23:37 UTC | 27 Nov 23 23:37 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-606180 addons disable                                                                | addons-606180          | jenkins | v1.32.0 | 27 Nov 23 23:37 UTC | 27 Nov 23 23:37 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/11/27 23:30:30
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.21.4 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1127 23:30:30.791120 1461234 out.go:296] Setting OutFile to fd 1 ...
	I1127 23:30:30.791313 1461234 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1127 23:30:30.791324 1461234 out.go:309] Setting ErrFile to fd 2...
	I1127 23:30:30.791330 1461234 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1127 23:30:30.791576 1461234 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17206-1455288/.minikube/bin
	I1127 23:30:30.792023 1461234 out.go:303] Setting JSON to false
	I1127 23:30:30.793065 1461234 start.go:128] hostinfo: {"hostname":"ip-172-31-29-130","uptime":22380,"bootTime":1701105451,"procs":325,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1050-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1127 23:30:30.793143 1461234 start.go:138] virtualization:  
	I1127 23:30:30.795404 1461234 out.go:177] * [addons-606180] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I1127 23:30:30.797502 1461234 out.go:177]   - MINIKUBE_LOCATION=17206
	I1127 23:30:30.799604 1461234 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1127 23:30:30.797652 1461234 notify.go:220] Checking for updates...
	I1127 23:30:30.803600 1461234 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17206-1455288/kubeconfig
	I1127 23:30:30.805310 1461234 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17206-1455288/.minikube
	I1127 23:30:30.807319 1461234 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1127 23:30:30.808945 1461234 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1127 23:30:30.810838 1461234 driver.go:378] Setting default libvirt URI to qemu:///system
	I1127 23:30:30.837960 1461234 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1127 23:30:30.838087 1461234 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1127 23:30:30.929740 1461234 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:38 SystemTime:2023-11-27 23:30:30.920410416 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1050-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1127 23:30:30.929846 1461234 docker.go:295] overlay module found
	I1127 23:30:30.937479 1461234 out.go:177] * Using the docker driver based on user configuration
	I1127 23:30:30.939402 1461234 start.go:298] selected driver: docker
	I1127 23:30:30.939420 1461234 start.go:902] validating driver "docker" against <nil>
	I1127 23:30:30.939442 1461234 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1127 23:30:30.940122 1461234 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1127 23:30:31.007577 1461234 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:38 SystemTime:2023-11-27 23:30:30.99526309 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1050-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1127 23:30:31.007751 1461234 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1127 23:30:31.007999 1461234 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1127 23:30:31.010276 1461234 out.go:177] * Using Docker driver with root privileges
	I1127 23:30:31.012555 1461234 cni.go:84] Creating CNI manager for ""
	I1127 23:30:31.012584 1461234 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1127 23:30:31.012597 1461234 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1127 23:30:31.012609 1461234 start_flags.go:323] config:
	{Name:addons-606180 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-606180 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1127 23:30:31.016471 1461234 out.go:177] * Starting control plane node addons-606180 in cluster addons-606180
	I1127 23:30:31.019287 1461234 cache.go:121] Beginning downloading kic base image for docker with crio
	I1127 23:30:31.021504 1461234 out.go:177] * Pulling base image ...
	I1127 23:30:31.023449 1461234 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1127 23:30:31.023512 1461234 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17206-1455288/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4
	I1127 23:30:31.023540 1461234 cache.go:56] Caching tarball of preloaded images
	I1127 23:30:31.023539 1461234 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 in local docker daemon
	I1127 23:30:31.023648 1461234 preload.go:174] Found /home/jenkins/minikube-integration/17206-1455288/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1127 23:30:31.023658 1461234 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I1127 23:30:31.024038 1461234 profile.go:148] Saving config to /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/addons-606180/config.json ...
	I1127 23:30:31.024058 1461234 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/addons-606180/config.json: {Name:mk98aa3562ae13c067bef3d093e820bfca183c6d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1127 23:30:31.041437 1461234 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 to local cache
	I1127 23:30:31.041582 1461234 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 in local cache directory
	I1127 23:30:31.041615 1461234 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 in local cache directory, skipping pull
	I1127 23:30:31.041623 1461234 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 exists in cache, skipping pull
	I1127 23:30:31.041631 1461234 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 as a tarball
	I1127 23:30:31.041636 1461234 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 from local cache
	I1127 23:30:47.102380 1461234 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 from cached tarball
	I1127 23:30:47.102421 1461234 cache.go:194] Successfully downloaded all kic artifacts
	I1127 23:30:47.102471 1461234 start.go:365] acquiring machines lock for addons-606180: {Name:mkb9f3d9cf320f6f0ea243d6e45ddf7a40419817 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1127 23:30:47.103184 1461234 start.go:369] acquired machines lock for "addons-606180" in 685.418µs
	I1127 23:30:47.103230 1461234 start.go:93] Provisioning new machine with config: &{Name:addons-606180 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-606180 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1127 23:30:47.103322 1461234 start.go:125] createHost starting for "" (driver="docker")
	I1127 23:30:47.105657 1461234 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I1127 23:30:47.105955 1461234 start.go:159] libmachine.API.Create for "addons-606180" (driver="docker")
	I1127 23:30:47.105988 1461234 client.go:168] LocalClient.Create starting
	I1127 23:30:47.106115 1461234 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/17206-1455288/.minikube/certs/ca.pem
	I1127 23:30:47.424605 1461234 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/17206-1455288/.minikube/certs/cert.pem
	I1127 23:30:47.871863 1461234 cli_runner.go:164] Run: docker network inspect addons-606180 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1127 23:30:47.889155 1461234 cli_runner.go:211] docker network inspect addons-606180 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1127 23:30:47.889238 1461234 network_create.go:281] running [docker network inspect addons-606180] to gather additional debugging logs...
	I1127 23:30:47.889260 1461234 cli_runner.go:164] Run: docker network inspect addons-606180
	W1127 23:30:47.907367 1461234 cli_runner.go:211] docker network inspect addons-606180 returned with exit code 1
	I1127 23:30:47.907420 1461234 network_create.go:284] error running [docker network inspect addons-606180]: docker network inspect addons-606180: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-606180 not found
	I1127 23:30:47.907435 1461234 network_create.go:286] output of [docker network inspect addons-606180]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-606180 not found
	
	** /stderr **
	I1127 23:30:47.907548 1461234 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1127 23:30:47.925966 1461234 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400049bf60}
	I1127 23:30:47.926008 1461234 network_create.go:124] attempt to create docker network addons-606180 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1127 23:30:47.926071 1461234 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-606180 addons-606180
	I1127 23:30:48.011358 1461234 network_create.go:108] docker network addons-606180 192.168.49.0/24 created
	I1127 23:30:48.011395 1461234 kic.go:121] calculated static IP "192.168.49.2" for the "addons-606180" container
	I1127 23:30:48.011507 1461234 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1127 23:30:48.034655 1461234 cli_runner.go:164] Run: docker volume create addons-606180 --label name.minikube.sigs.k8s.io=addons-606180 --label created_by.minikube.sigs.k8s.io=true
	I1127 23:30:48.055276 1461234 oci.go:103] Successfully created a docker volume addons-606180
	I1127 23:30:48.055383 1461234 cli_runner.go:164] Run: docker run --rm --name addons-606180-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-606180 --entrypoint /usr/bin/test -v addons-606180:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 -d /var/lib
	I1127 23:30:50.257930 1461234 cli_runner.go:217] Completed: docker run --rm --name addons-606180-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-606180 --entrypoint /usr/bin/test -v addons-606180:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 -d /var/lib: (2.202505195s)
	I1127 23:30:50.257966 1461234 oci.go:107] Successfully prepared a docker volume addons-606180
	I1127 23:30:50.257997 1461234 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1127 23:30:50.258019 1461234 kic.go:194] Starting extracting preloaded images to volume ...
	I1127 23:30:50.258104 1461234 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17206-1455288/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-606180:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 -I lz4 -xf /preloaded.tar -C /extractDir
	I1127 23:30:54.587871 1461234 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17206-1455288/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-606180:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 -I lz4 -xf /preloaded.tar -C /extractDir: (4.329723742s)
	I1127 23:30:54.587905 1461234 kic.go:203] duration metric: took 4.329882 seconds to extract preloaded images to volume
	W1127 23:30:54.588054 1461234 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1127 23:30:54.588176 1461234 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1127 23:30:54.655562 1461234 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-606180 --name addons-606180 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-606180 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-606180 --network addons-606180 --ip 192.168.49.2 --volume addons-606180:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50
	I1127 23:30:55.027735 1461234 cli_runner.go:164] Run: docker container inspect addons-606180 --format={{.State.Running}}
	I1127 23:30:55.063079 1461234 cli_runner.go:164] Run: docker container inspect addons-606180 --format={{.State.Status}}
	I1127 23:30:55.091226 1461234 cli_runner.go:164] Run: docker exec addons-606180 stat /var/lib/dpkg/alternatives/iptables
	I1127 23:30:55.183506 1461234 oci.go:144] the created container "addons-606180" has a running status.
	I1127 23:30:55.183541 1461234 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17206-1455288/.minikube/machines/addons-606180/id_rsa...
	I1127 23:30:55.437058 1461234 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17206-1455288/.minikube/machines/addons-606180/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1127 23:30:55.475596 1461234 cli_runner.go:164] Run: docker container inspect addons-606180 --format={{.State.Status}}
	I1127 23:30:55.494526 1461234 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1127 23:30:55.494549 1461234 kic_runner.go:114] Args: [docker exec --privileged addons-606180 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1127 23:30:55.568847 1461234 cli_runner.go:164] Run: docker container inspect addons-606180 --format={{.State.Status}}
	I1127 23:30:55.611184 1461234 machine.go:88] provisioning docker machine ...
	I1127 23:30:55.611218 1461234 ubuntu.go:169] provisioning hostname "addons-606180"
	I1127 23:30:55.611285 1461234 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-606180
	I1127 23:30:55.635688 1461234 main.go:141] libmachine: Using SSH client type: native
	I1127 23:30:55.636109 1461234 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3be600] 0x3c0d70 <nil>  [] 0s} 127.0.0.1 34069 <nil> <nil>}
	I1127 23:30:55.636127 1461234 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-606180 && echo "addons-606180" | sudo tee /etc/hostname
	I1127 23:30:55.636698 1461234 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:55424->127.0.0.1:34069: read: connection reset by peer
	I1127 23:30:58.782036 1461234 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-606180
	
	I1127 23:30:58.782117 1461234 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-606180
	I1127 23:30:58.804981 1461234 main.go:141] libmachine: Using SSH client type: native
	I1127 23:30:58.805389 1461234 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3be600] 0x3c0d70 <nil>  [] 0s} 127.0.0.1 34069 <nil> <nil>}
	I1127 23:30:58.805410 1461234 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-606180' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-606180/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-606180' | sudo tee -a /etc/hosts; 
				fi
			fi
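Both SSH commands are idempotent: the first sets the hostname, the second only rewrites the 127.0.1.1 entry when it does not already match. The result can be spot-checked from the host (a sketch; the container name is from the log):

	docker exec addons-606180 hostname               # expect: addons-606180
	docker exec addons-606180 grep 127.0.1.1 /etc/hosts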
	I1127 23:30:58.935167 1461234 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1127 23:30:58.935196 1461234 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17206-1455288/.minikube CaCertPath:/home/jenkins/minikube-integration/17206-1455288/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17206-1455288/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17206-1455288/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17206-1455288/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17206-1455288/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17206-1455288/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17206-1455288/.minikube}
	I1127 23:30:58.935226 1461234 ubuntu.go:177] setting up certificates
	I1127 23:30:58.935241 1461234 provision.go:83] configureAuth start
	I1127 23:30:58.935307 1461234 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-606180
	I1127 23:30:58.955138 1461234 provision.go:138] copyHostCerts
	I1127 23:30:58.955216 1461234 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17206-1455288/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17206-1455288/.minikube/ca.pem (1078 bytes)
	I1127 23:30:58.955353 1461234 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17206-1455288/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17206-1455288/.minikube/cert.pem (1123 bytes)
	I1127 23:30:58.955420 1461234 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17206-1455288/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17206-1455288/.minikube/key.pem (1679 bytes)
	I1127 23:30:58.955478 1461234 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17206-1455288/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17206-1455288/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17206-1455288/.minikube/certs/ca-key.pem org=jenkins.addons-606180 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube addons-606180]
	I1127 23:30:59.348479 1461234 provision.go:172] copyRemoteCerts
	I1127 23:30:59.348549 1461234 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1127 23:30:59.348593 1461234 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-606180
	I1127 23:30:59.366460 1461234 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34069 SSHKeyPath:/home/jenkins/minikube-integration/17206-1455288/.minikube/machines/addons-606180/id_rsa Username:docker}
	I1127 23:30:59.460467 1461234 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-1455288/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1127 23:30:59.488119 1461234 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-1455288/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I1127 23:30:59.516482 1461234 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-1455288/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1127 23:30:59.545668 1461234 provision.go:86] duration metric: configureAuth took 610.405872ms
	I1127 23:30:59.545695 1461234 ubuntu.go:193] setting minikube options for container-runtime
	I1127 23:30:59.545962 1461234 config.go:182] Loaded profile config "addons-606180": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1127 23:30:59.546077 1461234 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-606180
	I1127 23:30:59.564072 1461234 main.go:141] libmachine: Using SSH client type: native
	I1127 23:30:59.564496 1461234 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3be600] 0x3c0d70 <nil>  [] 0s} 127.0.0.1 34069 <nil> <nil>}
	I1127 23:30:59.564517 1461234 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1127 23:30:59.807618 1461234 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1127 23:30:59.807642 1461234 machine.go:91] provisioned docker machine in 4.196435234s
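The crio.minikube step above amounts to writing a one-line environment file for the CRI-O unit and bouncing the service. The end state should look roughly like this (a sketch reconstructed from the command and its echoed output; the path and variable name are verbatim from the log):

	$ cat /etc/sysconfig/crio.minikube
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	$ sudo systemctl is-active crio
	active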
	I1127 23:30:59.807653 1461234 client.go:171] LocalClient.Create took 12.701658748s
	I1127 23:30:59.807663 1461234 start.go:167] duration metric: libmachine.API.Create for "addons-606180" took 12.70170921s
	I1127 23:30:59.807671 1461234 start.go:300] post-start starting for "addons-606180" (driver="docker")
	I1127 23:30:59.807681 1461234 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1127 23:30:59.807749 1461234 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1127 23:30:59.807799 1461234 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-606180
	I1127 23:30:59.825956 1461234 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34069 SSHKeyPath:/home/jenkins/minikube-integration/17206-1455288/.minikube/machines/addons-606180/id_rsa Username:docker}
	I1127 23:30:59.920858 1461234 ssh_runner.go:195] Run: cat /etc/os-release
	I1127 23:30:59.925072 1461234 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1127 23:30:59.925122 1461234 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1127 23:30:59.925139 1461234 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1127 23:30:59.925147 1461234 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1127 23:30:59.925165 1461234 filesync.go:126] Scanning /home/jenkins/minikube-integration/17206-1455288/.minikube/addons for local assets ...
	I1127 23:30:59.925244 1461234 filesync.go:126] Scanning /home/jenkins/minikube-integration/17206-1455288/.minikube/files for local assets ...
	I1127 23:30:59.925270 1461234 start.go:303] post-start completed in 117.593434ms
	I1127 23:30:59.925585 1461234 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-606180
	I1127 23:30:59.942956 1461234 profile.go:148] Saving config to /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/addons-606180/config.json ...
	I1127 23:30:59.943259 1461234 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1127 23:30:59.943318 1461234 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-606180
	I1127 23:30:59.961012 1461234 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34069 SSHKeyPath:/home/jenkins/minikube-integration/17206-1455288/.minikube/machines/addons-606180/id_rsa Username:docker}
	I1127 23:31:00.091247 1461234 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1127 23:31:00.115085 1461234 start.go:128] duration metric: createHost completed in 13.011742204s
	I1127 23:31:00.115164 1461234 start.go:83] releasing machines lock for "addons-606180", held for 13.011955174s
	I1127 23:31:00.115622 1461234 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-606180
	I1127 23:31:00.207455 1461234 ssh_runner.go:195] Run: cat /version.json
	I1127 23:31:00.207525 1461234 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-606180
	I1127 23:31:00.208781 1461234 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1127 23:31:00.208914 1461234 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-606180
	I1127 23:31:00.253624 1461234 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34069 SSHKeyPath:/home/jenkins/minikube-integration/17206-1455288/.minikube/machines/addons-606180/id_rsa Username:docker}
	I1127 23:31:00.258709 1461234 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34069 SSHKeyPath:/home/jenkins/minikube-integration/17206-1455288/.minikube/machines/addons-606180/id_rsa Username:docker}
	I1127 23:31:00.513310 1461234 ssh_runner.go:195] Run: systemctl --version
	I1127 23:31:00.518990 1461234 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1127 23:31:00.666694 1461234 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1127 23:31:00.672416 1461234 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1127 23:31:00.699285 1461234 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1127 23:31:00.699395 1461234 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1127 23:31:00.737986 1461234 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
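Rather than deleting the stock CNI configs, minikube renames them with a .mk_disabled suffix so CRI-O ignores them and kindnet can take over; renaming is also trivially reversible. The equivalent manual steps for the two files named above:

	sudo mv /etc/cni/net.d/87-podman-bridge.conflist{,.mk_disabled}
	sudo mv /etc/cni/net.d/100-crio-bridge.conf{,.mk_disabled}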
	I1127 23:31:00.738015 1461234 start.go:472] detecting cgroup driver to use...
	I1127 23:31:00.738052 1461234 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1127 23:31:00.738108 1461234 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1127 23:31:00.756310 1461234 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1127 23:31:00.770247 1461234 docker.go:203] disabling cri-docker service (if available) ...
	I1127 23:31:00.770316 1461234 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1127 23:31:00.786290 1461234 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1127 23:31:00.802960 1461234 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1127 23:31:00.904456 1461234 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1127 23:31:01.006428 1461234 docker.go:219] disabling docker service ...
	I1127 23:31:01.006504 1461234 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1127 23:31:01.028779 1461234 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1127 23:31:01.042733 1461234 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1127 23:31:01.141492 1461234 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1127 23:31:01.280032 1461234 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1127 23:31:01.295554 1461234 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1127 23:31:01.316857 1461234 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1127 23:31:01.316926 1461234 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1127 23:31:01.329083 1461234 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1127 23:31:01.329153 1461234 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1127 23:31:01.341267 1461234 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1127 23:31:01.353961 1461234 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
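The three sed edits above leave a drop-in that pins the pause image and moves both CRI-O and conmon onto the cgroupfs driver. The relevant keys should end up roughly as below (a sketch; the TOML section headers are an assumption about the file layout, while the key/value pairs come from the commands):

	# /etc/crio/crio.conf.d/02-crio.conf (excerpt)
	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.9"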
	I1127 23:31:01.365992 1461234 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1127 23:31:01.377229 1461234 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1127 23:31:01.388033 1461234 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1127 23:31:01.398390 1461234 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1127 23:31:01.491933 1461234 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1127 23:31:01.617016 1461234 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1127 23:31:01.617182 1461234 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1127 23:31:01.621993 1461234 start.go:540] Will wait 60s for crictl version
	I1127 23:31:01.622102 1461234 ssh_runner.go:195] Run: which crictl
	I1127 23:31:01.626431 1461234 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1127 23:31:01.669425 1461234 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1127 23:31:01.669528 1461234 ssh_runner.go:195] Run: crio --version
	I1127 23:31:01.718159 1461234 ssh_runner.go:195] Run: crio --version
	I1127 23:31:01.769621 1461234 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.6 ...
	I1127 23:31:01.771944 1461234 cli_runner.go:164] Run: docker network inspect addons-606180 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1127 23:31:01.788812 1461234 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1127 23:31:01.793518 1461234 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1127 23:31:01.806763 1461234 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1127 23:31:01.806833 1461234 ssh_runner.go:195] Run: sudo crictl images --output json
	I1127 23:31:01.880618 1461234 crio.go:496] all images are preloaded for cri-o runtime.
	I1127 23:31:01.880641 1461234 crio.go:415] Images already preloaded, skipping extraction
	I1127 23:31:01.880700 1461234 ssh_runner.go:195] Run: sudo crictl images --output json
	I1127 23:31:01.923565 1461234 crio.go:496] all images are preloaded for cri-o runtime.
	I1127 23:31:01.923588 1461234 cache_images.go:84] Images are preloaded, skipping loading
	I1127 23:31:01.923691 1461234 ssh_runner.go:195] Run: crio config
	I1127 23:31:01.978858 1461234 cni.go:84] Creating CNI manager for ""
	I1127 23:31:01.978882 1461234 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1127 23:31:01.978916 1461234 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1127 23:31:01.978939 1461234 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-606180 NodeName:addons-606180 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1127 23:31:01.979076 1461234 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-606180"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
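A config in this shape can be exercised without mutating the node (a sketch; --dry-run is a standard kubeadm flag, and the config path matches the one kubeadm is later invoked with in this log):

	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run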
	
	I1127 23:31:01.979139 1461234 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=addons-606180 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:addons-606180 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1127 23:31:01.979209 1461234 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1127 23:31:01.990269 1461234 binaries.go:44] Found k8s binaries, skipping transfer
	I1127 23:31:01.990350 1461234 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1127 23:31:02.002319 1461234 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (423 bytes)
	I1127 23:31:02.027597 1461234 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1127 23:31:02.048681 1461234 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2094 bytes)
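The three "scp memory" writes materialize the kubelet drop-in, the unit file, and the kubeadm config from in-memory buffers; systemd only sees the drop-in after a reload. Done by hand, the refresh would be (a sketch using the paths from the log):

	sudo systemctl daemon-reload
	systemctl cat kubelet    # unit plus the 10-kubeadm.conf drop-in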
	I1127 23:31:02.069250 1461234 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1127 23:31:02.073745 1461234 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1127 23:31:02.086896 1461234 certs.go:56] Setting up /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/addons-606180 for IP: 192.168.49.2
	I1127 23:31:02.086968 1461234 certs.go:190] acquiring lock for shared ca certs: {Name:mk268ef230412b241734813f303d69d9b36c42ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1127 23:31:02.087645 1461234 certs.go:204] generating minikubeCA CA: /home/jenkins/minikube-integration/17206-1455288/.minikube/ca.key
	I1127 23:31:02.297233 1461234 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17206-1455288/.minikube/ca.crt ...
	I1127 23:31:02.297263 1461234 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17206-1455288/.minikube/ca.crt: {Name:mk4b20127df581514b3c4d24b157879416814927 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1127 23:31:02.297450 1461234 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17206-1455288/.minikube/ca.key ...
	I1127 23:31:02.297469 1461234 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17206-1455288/.minikube/ca.key: {Name:mk7cddb679d86b9497de06ce8a9c2874103a1a80 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1127 23:31:02.297560 1461234 certs.go:204] generating proxyClientCA CA: /home/jenkins/minikube-integration/17206-1455288/.minikube/proxy-client-ca.key
	I1127 23:31:02.588889 1461234 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17206-1455288/.minikube/proxy-client-ca.crt ...
	I1127 23:31:02.588923 1461234 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17206-1455288/.minikube/proxy-client-ca.crt: {Name:mk87ab8f97cb6909c6a81d9bde94a80918683365 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1127 23:31:02.589730 1461234 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17206-1455288/.minikube/proxy-client-ca.key ...
	I1127 23:31:02.589748 1461234 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17206-1455288/.minikube/proxy-client-ca.key: {Name:mk29c610283f6f8aff9da8a9370d843a246c6566 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1127 23:31:02.589915 1461234 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/addons-606180/client.key
	I1127 23:31:02.589937 1461234 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/addons-606180/client.crt with IP's: []
	I1127 23:31:02.791271 1461234 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/addons-606180/client.crt ...
	I1127 23:31:02.791303 1461234 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/addons-606180/client.crt: {Name:mk93d5c1c5e5949c6d70d91c0abf801330bece20 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1127 23:31:02.791491 1461234 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/addons-606180/client.key ...
	I1127 23:31:02.791503 1461234 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/addons-606180/client.key: {Name:mk01e263fe753af40f8504ea71e90212144eb7fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1127 23:31:02.792203 1461234 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/addons-606180/apiserver.key.dd3b5fb2
	I1127 23:31:02.792226 1461234 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/addons-606180/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1127 23:31:03.966177 1461234 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/addons-606180/apiserver.crt.dd3b5fb2 ...
	I1127 23:31:03.966214 1461234 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/addons-606180/apiserver.crt.dd3b5fb2: {Name:mk3c7b887419259873ae5b55696978e1b8f27414 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1127 23:31:03.966439 1461234 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/addons-606180/apiserver.key.dd3b5fb2 ...
	I1127 23:31:03.966457 1461234 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/addons-606180/apiserver.key.dd3b5fb2: {Name:mk0fd30c8726b6c53e74f9c237301c1515a1dfd8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1127 23:31:03.967100 1461234 certs.go:337] copying /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/addons-606180/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/addons-606180/apiserver.crt
	I1127 23:31:03.967183 1461234 certs.go:341] copying /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/addons-606180/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/addons-606180/apiserver.key
	I1127 23:31:03.967245 1461234 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/addons-606180/proxy-client.key
	I1127 23:31:03.967270 1461234 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/addons-606180/proxy-client.crt with IP's: []
	I1127 23:31:04.156706 1461234 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/addons-606180/proxy-client.crt ...
	I1127 23:31:04.156737 1461234 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/addons-606180/proxy-client.crt: {Name:mk7cad928c7c7fa64cb618b58286412aa6c38c94 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1127 23:31:04.156931 1461234 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/addons-606180/proxy-client.key ...
	I1127 23:31:04.156946 1461234 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/addons-606180/proxy-client.key: {Name:mkd37f8afa92ebc544e0ec8aedfb242a8a06e84b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1127 23:31:04.157770 1461234 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-1455288/.minikube/certs/home/jenkins/minikube-integration/17206-1455288/.minikube/certs/ca-key.pem (1679 bytes)
	I1127 23:31:04.157821 1461234 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-1455288/.minikube/certs/home/jenkins/minikube-integration/17206-1455288/.minikube/certs/ca.pem (1078 bytes)
	I1127 23:31:04.157874 1461234 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-1455288/.minikube/certs/home/jenkins/minikube-integration/17206-1455288/.minikube/certs/cert.pem (1123 bytes)
	I1127 23:31:04.157904 1461234 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-1455288/.minikube/certs/home/jenkins/minikube-integration/17206-1455288/.minikube/certs/key.pem (1679 bytes)
	I1127 23:31:04.158512 1461234 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/addons-606180/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1127 23:31:04.188732 1461234 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/addons-606180/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1127 23:31:04.218446 1461234 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/addons-606180/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1127 23:31:04.247023 1461234 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/addons-606180/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1127 23:31:04.275827 1461234 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-1455288/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1127 23:31:04.304614 1461234 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-1455288/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1127 23:31:04.332479 1461234 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-1455288/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1127 23:31:04.360499 1461234 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-1455288/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1127 23:31:04.389075 1461234 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-1455288/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1127 23:31:04.417104 1461234 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1127 23:31:04.438045 1461234 ssh_runner.go:195] Run: openssl version
	I1127 23:31:04.445023 1461234 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1127 23:31:04.456589 1461234 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1127 23:31:04.461265 1461234 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Nov 27 23:31 /usr/share/ca-certificates/minikubeCA.pem
	I1127 23:31:04.461340 1461234 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1127 23:31:04.469689 1461234 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
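The link name b5213941.0 follows OpenSSL's subject-hash convention: CA lookups in /etc/ssl/certs open <hash>.0, so the hash printed by the command above becomes the symlink name. To reproduce it by hand:

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	# prints b5213941 in this run; the .0 suffix disambiguates hash collisions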
	I1127 23:31:04.481610 1461234 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1127 23:31:04.485825 1461234 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1127 23:31:04.485893 1461234 kubeadm.go:404] StartCluster: {Name:addons-606180 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-606180 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1127 23:31:04.485968 1461234 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1127 23:31:04.486023 1461234 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1127 23:31:04.528273 1461234 cri.go:89] found id: ""
	I1127 23:31:04.528342 1461234 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1127 23:31:04.538786 1461234 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1127 23:31:04.549679 1461234 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I1127 23:31:04.549745 1461234 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1127 23:31:04.560494 1461234 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1127 23:31:04.560572 1461234 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1127 23:31:04.618237 1461234 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I1127 23:31:04.618637 1461234 kubeadm.go:322] [preflight] Running pre-flight checks
	I1127 23:31:04.664406 1461234 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I1127 23:31:04.664477 1461234 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1050-aws
	I1127 23:31:04.664520 1461234 kubeadm.go:322] OS: Linux
	I1127 23:31:04.664566 1461234 kubeadm.go:322] CGROUPS_CPU: enabled
	I1127 23:31:04.664614 1461234 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I1127 23:31:04.664662 1461234 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I1127 23:31:04.664712 1461234 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I1127 23:31:04.664763 1461234 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I1127 23:31:04.664816 1461234 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I1127 23:31:04.664861 1461234 kubeadm.go:322] CGROUPS_PIDS: enabled
	I1127 23:31:04.664910 1461234 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I1127 23:31:04.664957 1461234 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I1127 23:31:04.752084 1461234 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1127 23:31:04.752254 1461234 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1127 23:31:04.752380 1461234 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1127 23:31:05.027173 1461234 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1127 23:31:05.030690 1461234 out.go:204]   - Generating certificates and keys ...
	I1127 23:31:05.030829 1461234 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1127 23:31:05.030992 1461234 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1127 23:31:05.733223 1461234 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1127 23:31:05.930097 1461234 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1127 23:31:06.489965 1461234 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1127 23:31:06.810977 1461234 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1127 23:31:07.447121 1461234 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1127 23:31:07.447276 1461234 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-606180 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1127 23:31:08.021666 1461234 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1127 23:31:08.021886 1461234 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-606180 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1127 23:31:08.993897 1461234 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1127 23:31:10.036368 1461234 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1127 23:31:10.483821 1461234 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1127 23:31:10.483999 1461234 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1127 23:31:10.807125 1461234 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1127 23:31:11.485744 1461234 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1127 23:31:11.709901 1461234 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1127 23:31:12.203528 1461234 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1127 23:31:12.204483 1461234 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1127 23:31:12.207392 1461234 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1127 23:31:12.210179 1461234 out.go:204]   - Booting up control plane ...
	I1127 23:31:12.210337 1461234 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1127 23:31:12.210417 1461234 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1127 23:31:12.210951 1461234 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1127 23:31:12.221711 1461234 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1127 23:31:12.222588 1461234 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1127 23:31:12.222889 1461234 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1127 23:31:12.328298 1461234 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1127 23:31:20.332865 1461234 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.004662 seconds
	I1127 23:31:20.332989 1461234 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1127 23:31:20.360281 1461234 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1127 23:31:20.915333 1461234 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1127 23:31:20.915541 1461234 kubeadm.go:322] [mark-control-plane] Marking the node addons-606180 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1127 23:31:21.428157 1461234 kubeadm.go:322] [bootstrap-token] Using token: upi9dy.yrcavm3msp6sby15
	I1127 23:31:21.429837 1461234 out.go:204]   - Configuring RBAC rules ...
	I1127 23:31:21.429968 1461234 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1127 23:31:21.436949 1461234 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1127 23:31:21.445425 1461234 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1127 23:31:21.449217 1461234 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1127 23:31:21.453068 1461234 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1127 23:31:21.456907 1461234 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1127 23:31:21.470861 1461234 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1127 23:31:21.727227 1461234 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1127 23:31:21.842179 1461234 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1127 23:31:21.845467 1461234 kubeadm.go:322] 
	I1127 23:31:21.845541 1461234 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1127 23:31:21.845547 1461234 kubeadm.go:322] 
	I1127 23:31:21.845620 1461234 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1127 23:31:21.845625 1461234 kubeadm.go:322] 
	I1127 23:31:21.845651 1461234 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1127 23:31:21.845706 1461234 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1127 23:31:21.845771 1461234 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1127 23:31:21.845779 1461234 kubeadm.go:322] 
	I1127 23:31:21.845830 1461234 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1127 23:31:21.845835 1461234 kubeadm.go:322] 
	I1127 23:31:21.845930 1461234 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1127 23:31:21.845935 1461234 kubeadm.go:322] 
	I1127 23:31:21.845985 1461234 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1127 23:31:21.846054 1461234 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1127 23:31:21.846119 1461234 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1127 23:31:21.846124 1461234 kubeadm.go:322] 
	I1127 23:31:21.846202 1461234 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1127 23:31:21.846274 1461234 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1127 23:31:21.846279 1461234 kubeadm.go:322] 
	I1127 23:31:21.846356 1461234 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token upi9dy.yrcavm3msp6sby15 \
	I1127 23:31:21.846453 1461234 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:4bf7580b26a5e006cb8545a36d546acc708a5c0d8ea7cd28dd99f58e9fcb6509 \
	I1127 23:31:21.846472 1461234 kubeadm.go:322] 	--control-plane 
	I1127 23:31:21.846478 1461234 kubeadm.go:322] 
	I1127 23:31:21.846557 1461234 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1127 23:31:21.846562 1461234 kubeadm.go:322] 
	I1127 23:31:21.846638 1461234 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token upi9dy.yrcavm3msp6sby15 \
	I1127 23:31:21.846733 1461234 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:4bf7580b26a5e006cb8545a36d546acc708a5c0d8ea7cd28dd99f58e9fcb6509 
	I1127 23:31:21.848279 1461234 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1050-aws\n", err: exit status 1
	I1127 23:31:21.848387 1461234 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1127 23:31:21.848404 1461234 cni.go:84] Creating CNI manager for ""
	I1127 23:31:21.848412 1461234 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1127 23:31:21.851607 1461234 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1127 23:31:21.853403 1461234 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1127 23:31:21.861055 1461234 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I1127 23:31:21.861076 1461234 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1127 23:31:21.892877 1461234 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
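With the manifest applied, kindnet runs as a DaemonSet in kube-system. One way to watch it come up, in the same style as the kubectl calls in this log (a sketch; the app=kindnet label is an assumption, since the manifest itself is not shown here):

	sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	  -n kube-system get pods -l app=kindnet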
	I1127 23:31:22.737726 1461234 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1127 23:31:22.737815 1461234 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:31:22.737846 1461234 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=c8086df69b157f30f19d083fe45cc014f102df45 minikube.k8s.io/name=addons-606180 minikube.k8s.io/updated_at=2023_11_27T23_31_22_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:31:22.936152 1461234 ops.go:34] apiserver oom_adj: -16
	I1127 23:31:22.936238 1461234 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:31:23.056660 1461234 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:31:23.652163 1461234 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:31:24.151918 1461234 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:31:24.651936 1461234 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:31:25.152076 1461234 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:31:25.652383 1461234 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:31:26.152460 1461234 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:31:26.652014 1461234 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:31:27.151683 1461234 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:31:27.651592 1461234 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:31:28.151622 1461234 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:31:28.652207 1461234 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:31:29.151549 1461234 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:31:29.652258 1461234 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:31:30.152008 1461234 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:31:30.652356 1461234 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:31:31.152029 1461234 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:31:31.651870 1461234 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:31:32.151863 1461234 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:31:32.651768 1461234 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:31:33.152520 1461234 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:31:33.652510 1461234 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:31:34.152428 1461234 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:31:34.249451 1461234 kubeadm.go:1081] duration metric: took 11.511704583s to wait for elevateKubeSystemPrivileges.
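The repeated "get sa default" calls above are a poll: the minikube-rbac clusterrolebinding created at 23:31:22 is only useful once kubeadm has created the default ServiceAccount, which happens asynchronously, so minikube retries until the lookup succeeds. The two pieces side by side (both commands appear verbatim in the log, minus the binary and kubeconfig paths):

	kubectl create clusterrolebinding minikube-rbac \
	  --clusterrole=cluster-admin --serviceaccount=kube-system:default
	kubectl -n default get sa default    # retried (~500ms apart) until the SA exists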
	I1127 23:31:34.249482 1461234 kubeadm.go:406] StartCluster complete in 29.763592285s
	I1127 23:31:34.249504 1461234 settings.go:142] acquiring lock: {Name:mk2effde19f5a08dd61e438cec70b0751f0f2f09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1127 23:31:34.249620 1461234 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17206-1455288/kubeconfig
	I1127 23:31:34.250032 1461234 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17206-1455288/kubeconfig: {Name:mk024e2b9ecd216772e0b17d0d1d16e859027716 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1127 23:31:34.252230 1461234 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1127 23:31:34.252478 1461234 config.go:182] Loaded profile config "addons-606180": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1127 23:31:34.252513 1461234 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true]
	I1127 23:31:34.252590 1461234 addons.go:69] Setting volumesnapshots=true in profile "addons-606180"
	I1127 23:31:34.252606 1461234 addons.go:231] Setting addon volumesnapshots=true in "addons-606180"
	I1127 23:31:34.252663 1461234 host.go:66] Checking if "addons-606180" exists ...
	I1127 23:31:34.253150 1461234 cli_runner.go:164] Run: docker container inspect addons-606180 --format={{.State.Status}}
	I1127 23:31:34.253786 1461234 addons.go:69] Setting ingress=true in profile "addons-606180"
	I1127 23:31:34.253809 1461234 addons.go:231] Setting addon ingress=true in "addons-606180"
	I1127 23:31:34.253861 1461234 host.go:66] Checking if "addons-606180" exists ...
	I1127 23:31:34.254276 1461234 cli_runner.go:164] Run: docker container inspect addons-606180 --format={{.State.Status}}
	I1127 23:31:34.254576 1461234 addons.go:69] Setting ingress-dns=true in profile "addons-606180"
	I1127 23:31:34.254595 1461234 addons.go:231] Setting addon ingress-dns=true in "addons-606180"
	I1127 23:31:34.254640 1461234 host.go:66] Checking if "addons-606180" exists ...
	I1127 23:31:34.255019 1461234 cli_runner.go:164] Run: docker container inspect addons-606180 --format={{.State.Status}}
	I1127 23:31:34.255237 1461234 addons.go:69] Setting cloud-spanner=true in profile "addons-606180"
	I1127 23:31:34.255267 1461234 addons.go:231] Setting addon cloud-spanner=true in "addons-606180"
	I1127 23:31:34.255326 1461234 host.go:66] Checking if "addons-606180" exists ...
	I1127 23:31:34.255831 1461234 cli_runner.go:164] Run: docker container inspect addons-606180 --format={{.State.Status}}
	I1127 23:31:34.257371 1461234 addons.go:69] Setting inspektor-gadget=true in profile "addons-606180"
	I1127 23:31:34.257396 1461234 addons.go:231] Setting addon inspektor-gadget=true in "addons-606180"
	I1127 23:31:34.257441 1461234 host.go:66] Checking if "addons-606180" exists ...
	I1127 23:31:34.257923 1461234 cli_runner.go:164] Run: docker container inspect addons-606180 --format={{.State.Status}}
	I1127 23:31:34.267884 1461234 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-606180"
	I1127 23:31:34.267956 1461234 addons.go:231] Setting addon csi-hostpath-driver=true in "addons-606180"
	I1127 23:31:34.268007 1461234 host.go:66] Checking if "addons-606180" exists ...
	I1127 23:31:34.268501 1461234 cli_runner.go:164] Run: docker container inspect addons-606180 --format={{.State.Status}}
	I1127 23:31:34.281259 1461234 addons.go:69] Setting metrics-server=true in profile "addons-606180"
	I1127 23:31:34.281302 1461234 addons.go:231] Setting addon metrics-server=true in "addons-606180"
	I1127 23:31:34.281351 1461234 host.go:66] Checking if "addons-606180" exists ...
	I1127 23:31:34.281809 1461234 cli_runner.go:164] Run: docker container inspect addons-606180 --format={{.State.Status}}
	I1127 23:31:34.282059 1461234 addons.go:69] Setting default-storageclass=true in profile "addons-606180"
	I1127 23:31:34.282074 1461234 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-606180"
	I1127 23:31:34.282377 1461234 cli_runner.go:164] Run: docker container inspect addons-606180 --format={{.State.Status}}
	I1127 23:31:34.296498 1461234 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-606180"
	I1127 23:31:34.296537 1461234 addons.go:231] Setting addon nvidia-device-plugin=true in "addons-606180"
	I1127 23:31:34.296585 1461234 host.go:66] Checking if "addons-606180" exists ...
	I1127 23:31:34.297096 1461234 cli_runner.go:164] Run: docker container inspect addons-606180 --format={{.State.Status}}
	I1127 23:31:34.304011 1461234 addons.go:69] Setting gcp-auth=true in profile "addons-606180"
	I1127 23:31:34.304091 1461234 mustload.go:65] Loading cluster: addons-606180
	I1127 23:31:34.304319 1461234 config.go:182] Loaded profile config "addons-606180": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1127 23:31:34.304601 1461234 cli_runner.go:164] Run: docker container inspect addons-606180 --format={{.State.Status}}
	I1127 23:31:34.308407 1461234 addons.go:69] Setting registry=true in profile "addons-606180"
	I1127 23:31:34.308441 1461234 addons.go:231] Setting addon registry=true in "addons-606180"
	I1127 23:31:34.308494 1461234 host.go:66] Checking if "addons-606180" exists ...
	I1127 23:31:34.308944 1461234 cli_runner.go:164] Run: docker container inspect addons-606180 --format={{.State.Status}}
	I1127 23:31:34.340157 1461234 addons.go:69] Setting storage-provisioner=true in profile "addons-606180"
	I1127 23:31:34.340186 1461234 addons.go:231] Setting addon storage-provisioner=true in "addons-606180"
	I1127 23:31:34.340235 1461234 host.go:66] Checking if "addons-606180" exists ...
	I1127 23:31:34.340668 1461234 cli_runner.go:164] Run: docker container inspect addons-606180 --format={{.State.Status}}
	I1127 23:31:34.361837 1461234 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-606180"
	I1127 23:31:34.361876 1461234 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-606180"
	I1127 23:31:34.362291 1461234 cli_runner.go:164] Run: docker container inspect addons-606180 --format={{.State.Status}}
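Each Setting-addon step above first probes the node container's state with a Go template before touching the addon. A minimal standalone reproduction of that probe, assuming the addons-606180 container exists under the docker driver:

	# same template the cli_runner lines show; prints e.g. "running"
	docker container inspect addons-606180 --format '{{.State.Status}}'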
	I1127 23:31:34.389094 1461234 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1127 23:31:34.391079 1461234 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1127 23:31:34.391116 1461234 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1127 23:31:34.391200 1461234 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-606180
	I1127 23:31:34.437728 1461234 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I1127 23:31:34.469721 1461234 addons.go:423] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1127 23:31:34.469767 1461234 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1127 23:31:34.469890 1461234 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-606180
	I1127 23:31:34.490911 1461234 addons.go:231] Setting addon default-storageclass=true in "addons-606180"
	I1127 23:31:34.490955 1461234 host.go:66] Checking if "addons-606180" exists ...
	I1127 23:31:34.491569 1461234 cli_runner.go:164] Run: docker container inspect addons-606180 --format={{.State.Status}}
	I1127 23:31:34.526071 1461234 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.12
	I1127 23:31:34.528557 1461234 addons.go:423] installing /etc/kubernetes/addons/deployment.yaml
	I1127 23:31:34.528628 1461234 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1127 23:31:34.528751 1461234 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-606180
	I1127 23:31:34.587716 1461234 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1127 23:31:34.589182 1461234 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1127 23:31:34.589219 1461234 host.go:66] Checking if "addons-606180" exists ...
	I1127 23:31:34.601185 1461234 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.3
	I1127 23:31:34.605053 1461234 addons.go:423] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1127 23:31:34.605081 1461234 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1127 23:31:34.605149 1461234 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-606180
	I1127 23:31:34.615094 1461234 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.22.0
	I1127 23:31:34.622047 1461234 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.6.4
	I1127 23:31:34.622058 1461234 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1127 23:31:34.622840 1461234 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1127 23:31:34.627682 1461234 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1127 23:31:34.627691 1461234 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I1127 23:31:34.628002 1461234 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34069 SSHKeyPath:/home/jenkins/minikube-integration/17206-1455288/.minikube/machines/addons-606180/id_rsa Username:docker}
	I1127 23:31:34.630928 1461234 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1127 23:31:34.632599 1461234 addons.go:423] installing /etc/kubernetes/addons/ig-namespace.yaml
	I1127 23:31:34.637687 1461234 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1127 23:31:34.637762 1461234 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1127 23:31:34.637914 1461234 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-606180
	I1127 23:31:34.642043 1461234 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I1127 23:31:34.642104 1461234 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1127 23:31:34.643649 1461234 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.9.4
	I1127 23:31:34.648314 1461234 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-606180
	I1127 23:31:34.650255 1461234 out.go:177]   - Using image docker.io/registry:2.8.3
	I1127 23:31:34.650346 1461234 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-606180
	I1127 23:31:34.654479 1461234 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34069 SSHKeyPath:/home/jenkins/minikube-integration/17206-1455288/.minikube/machines/addons-606180/id_rsa Username:docker}
	I1127 23:31:34.658446 1461234 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1127 23:31:34.659572 1461234 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1127 23:31:34.674953 1461234 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1127 23:31:34.675075 1461234 addons.go:423] installing /etc/kubernetes/addons/registry-rc.yaml
	I1127 23:31:34.675084 1461234 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I1127 23:31:34.675158 1461234 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-606180
	I1127 23:31:34.678129 1461234 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1127 23:31:34.683937 1461234 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-606180
	I1127 23:31:34.697807 1461234 addons.go:423] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1127 23:31:34.697836 1461234 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16103 bytes)
	I1127 23:31:34.698122 1461234 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-606180
	I1127 23:31:34.701050 1461234 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-606180" context rescaled to 1 replicas
	I1127 23:31:34.701101 1461234 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1127 23:31:34.704768 1461234 out.go:177] * Verifying Kubernetes components...
	I1127 23:31:34.706807 1461234 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
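The kubelet verification above relies only on systemctl's exit status. A simplified manual equivalent (dropping the extra `service` token minikube passes in its invocation):

	sudo systemctl is-active --quiet kubelet && echo "kubelet active"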
	I1127 23:31:34.706971 1461234 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34069 SSHKeyPath:/home/jenkins/minikube-integration/17206-1455288/.minikube/machines/addons-606180/id_rsa Username:docker}
	I1127 23:31:34.711807 1461234 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1127 23:31:34.714109 1461234 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1127 23:31:34.717229 1461234 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1127 23:31:34.720844 1461234 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1127 23:31:34.723211 1461234 addons.go:231] Setting addon storage-provisioner-rancher=true in "addons-606180"
	I1127 23:31:34.724729 1461234 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1127 23:31:34.729195 1461234 host.go:66] Checking if "addons-606180" exists ...
	I1127 23:31:34.729872 1461234 cli_runner.go:164] Run: docker container inspect addons-606180 --format={{.State.Status}}
	I1127 23:31:34.730167 1461234 addons.go:423] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1127 23:31:34.730184 1461234 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1127 23:31:34.730237 1461234 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-606180
	I1127 23:31:34.809991 1461234 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34069 SSHKeyPath:/home/jenkins/minikube-integration/17206-1455288/.minikube/machines/addons-606180/id_rsa Username:docker}
	I1127 23:31:34.824572 1461234 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34069 SSHKeyPath:/home/jenkins/minikube-integration/17206-1455288/.minikube/machines/addons-606180/id_rsa Username:docker}
	I1127 23:31:34.860783 1461234 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34069 SSHKeyPath:/home/jenkins/minikube-integration/17206-1455288/.minikube/machines/addons-606180/id_rsa Username:docker}
	I1127 23:31:34.886131 1461234 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34069 SSHKeyPath:/home/jenkins/minikube-integration/17206-1455288/.minikube/machines/addons-606180/id_rsa Username:docker}
	I1127 23:31:34.927445 1461234 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34069 SSHKeyPath:/home/jenkins/minikube-integration/17206-1455288/.minikube/machines/addons-606180/id_rsa Username:docker}
	I1127 23:31:34.936184 1461234 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34069 SSHKeyPath:/home/jenkins/minikube-integration/17206-1455288/.minikube/machines/addons-606180/id_rsa Username:docker}
	I1127 23:31:34.949580 1461234 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34069 SSHKeyPath:/home/jenkins/minikube-integration/17206-1455288/.minikube/machines/addons-606180/id_rsa Username:docker}
	I1127 23:31:34.962565 1461234 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34069 SSHKeyPath:/home/jenkins/minikube-integration/17206-1455288/.minikube/machines/addons-606180/id_rsa Username:docker}
	I1127 23:31:34.973532 1461234 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1127 23:31:34.975443 1461234 out.go:177]   - Using image docker.io/busybox:stable
	I1127 23:31:34.977302 1461234 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1127 23:31:34.977334 1461234 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1127 23:31:34.977414 1461234 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-606180
	I1127 23:31:35.020683 1461234 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34069 SSHKeyPath:/home/jenkins/minikube-integration/17206-1455288/.minikube/machines/addons-606180/id_rsa Username:docker}
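All of the sshutil clients above dial 127.0.0.1:34069, the host port Docker publishes for the node container's 22/tcp. The mapping can be read back with the same template the cli_runner lines use:

	# resolves the forwarded SSH port (34069 on this run)
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-606180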
	I1127 23:31:35.142848 1461234 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1127 23:31:35.314470 1461234 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1127 23:31:35.314505 1461234 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1127 23:31:35.392246 1461234 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1127 23:31:35.403713 1461234 addons.go:423] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I1127 23:31:35.403739 1461234 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I1127 23:31:35.414756 1461234 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1127 23:31:35.414788 1461234 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1127 23:31:35.420049 1461234 addons.go:423] installing /etc/kubernetes/addons/registry-svc.yaml
	I1127 23:31:35.420120 1461234 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1127 23:31:35.424466 1461234 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1127 23:31:35.438944 1461234 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1127 23:31:35.438968 1461234 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1127 23:31:35.476184 1461234 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1127 23:31:35.486829 1461234 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1127 23:31:35.546644 1461234 addons.go:423] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1127 23:31:35.546728 1461234 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1127 23:31:35.547527 1461234 addons.go:423] installing /etc/kubernetes/addons/ig-role.yaml
	I1127 23:31:35.547581 1461234 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I1127 23:31:35.598266 1461234 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1127 23:31:35.621013 1461234 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1127 23:31:35.623743 1461234 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1127 23:31:35.623807 1461234 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1127 23:31:35.663576 1461234 addons.go:423] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1127 23:31:35.663644 1461234 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1127 23:31:35.717582 1461234 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1127 23:31:35.717663 1461234 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1127 23:31:35.720979 1461234 addons.go:423] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I1127 23:31:35.721044 1461234 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I1127 23:31:35.735752 1461234 addons.go:423] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1127 23:31:35.735827 1461234 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1127 23:31:35.812121 1461234 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1127 23:31:35.869260 1461234 addons.go:423] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1127 23:31:35.869335 1461234 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1127 23:31:35.900777 1461234 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I1127 23:31:35.900848 1461234 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I1127 23:31:35.923241 1461234 addons.go:423] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1127 23:31:35.923312 1461234 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1127 23:31:35.928450 1461234 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1127 23:31:35.928520 1461234 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1127 23:31:36.093654 1461234 addons.go:423] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1127 23:31:36.093732 1461234 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1127 23:31:36.102995 1461234 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I1127 23:31:36.103061 1461234 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I1127 23:31:36.152588 1461234 addons.go:423] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1127 23:31:36.152679 1461234 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1127 23:31:36.174166 1461234 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1127 23:31:36.296951 1461234 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1127 23:31:36.327380 1461234 addons.go:423] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1127 23:31:36.327447 1461234 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1127 23:31:36.340776 1461234 addons.go:423] installing /etc/kubernetes/addons/ig-crd.yaml
	I1127 23:31:36.340844 1461234 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I1127 23:31:36.434577 1461234 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1127 23:31:36.434640 1461234 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1127 23:31:36.463716 1461234 addons.go:423] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I1127 23:31:36.463788 1461234 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7741 bytes)
	I1127 23:31:36.567521 1461234 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1127 23:31:36.567593 1461234 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1127 23:31:36.621816 1461234 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I1127 23:31:36.653128 1461234 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1127 23:31:36.653199 1461234 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1127 23:31:36.715579 1461234 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1127 23:31:36.715656 1461234 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1127 23:31:36.879068 1461234 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1127 23:31:36.879134 1461234 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1127 23:31:36.990615 1461234 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1127 23:31:37.062706 1461234 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.440625034s)
	I1127 23:31:37.062735 1461234 start.go:926] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
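The sed pipeline that just completed splices a hosts block into the coredns Corefile so cluster workloads resolve host.minikube.internal to the gateway address 192.168.49.1. One way to confirm the record landed, using the same kubeconfig and kubectl binary as the log (a sketch):

	sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	  -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}' | grep -A3 'hosts {'
	# expected fragment:
	#        hosts {
	#           192.168.49.1 host.minikube.internal
	#           fallthrough
	#        }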
	I1127 23:31:37.062797 1461234 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.355965422s)
	I1127 23:31:37.063581 1461234 node_ready.go:35] waiting up to 6m0s for node "addons-606180" to be "Ready" ...
	I1127 23:31:38.859018 1461234 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.71613283s)
	I1127 23:31:38.859102 1461234 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.46683078s)
	I1127 23:31:38.859187 1461234 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (3.434652921s)
	I1127 23:31:38.893265 1461234 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.417030874s)
	I1127 23:31:39.411025 1461234 node_ready.go:58] node "addons-606180" has status "Ready":"False"
	I1127 23:31:40.454843 1461234 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (4.967981561s)
	I1127 23:31:40.454880 1461234 addons.go:467] Verifying addon ingress=true in "addons-606180"
	I1127 23:31:40.454903 1461234 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.856476057s)
	I1127 23:31:40.455234 1461234 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.834149337s)
	I1127 23:31:40.455262 1461234 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.643069502s)
	I1127 23:31:40.455344 1461234 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.281112871s)
	I1127 23:31:40.455492 1461234 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.158456671s)
	I1127 23:31:40.455555 1461234 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (3.833633219s)
	I1127 23:31:40.458143 1461234 out.go:177] * Verifying ingress addon...
	I1127 23:31:40.460772 1461234 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1127 23:31:40.461149 1461234 addons.go:467] Verifying addon metrics-server=true in "addons-606180"
	I1127 23:31:40.461162 1461234 addons.go:467] Verifying addon registry=true in "addons-606180"
	W1127 23:31:40.461189 1461234 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1127 23:31:40.463054 1461234 out.go:177] * Verifying registry addon...
	I1127 23:31:40.463095 1461234 retry.go:31] will retry after 178.914262ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	I1127 23:31:40.466019 1461234 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1127 23:31:40.506641 1461234 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1127 23:31:40.506673 1461234 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:31:40.512707 1461234 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1127 23:31:40.512740 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
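kapi.go polls pods by label selector until they leave Pending. A manual equivalent of the registry wait above (standard kubectl, hypothetical invocation):

	kubectl --context addons-606180 -n kube-system get pods \
	  -l kubernetes.io/minikube-addons=registry -o wide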
	I1127 23:31:40.523376 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:31:40.524502 1461234 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:31:40.645288 1461234 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
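The apply failure above is an ordering race: the VolumeSnapshotClass is validated before its freshly created CRD is established in the apiserver, hence "ensure CRDs are installed first". minikube simply retries (here with --force). An alternative sketch that waits for the CRD explicitly, assuming the same manifest paths and binaries as this run:

	KUBECTL="sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl"
	$KUBECTL apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	# block until the CRD is registered and serving
	$KUBECTL wait --for=condition=Established crd/volumesnapshotclasses.snapshot.storage.k8s.io --timeout=60s
	$KUBECTL apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml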
	I1127 23:31:40.848825 1461234 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.858110164s)
	I1127 23:31:40.848904 1461234 addons.go:467] Verifying addon csi-hostpath-driver=true in "addons-606180"
	I1127 23:31:40.850769 1461234 out.go:177] * Verifying csi-hostpath-driver addon...
	I1127 23:31:40.853677 1461234 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1127 23:31:40.876155 1461234 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1127 23:31:40.876179 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:31:40.881156 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:31:41.031527 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:31:41.033092 1461234 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:31:41.386785 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:31:41.531913 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:31:41.532407 1461234 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:31:41.685917 1461234 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.040583515s)
	I1127 23:31:41.759329 1461234 node_ready.go:58] node "addons-606180" has status "Ready":"False"
	I1127 23:31:41.885720 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:31:42.028852 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:31:42.031863 1461234 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:31:42.047513 1461234 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1127 23:31:42.047593 1461234 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-606180
	I1127 23:31:42.072504 1461234 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34069 SSHKeyPath:/home/jenkins/minikube-integration/17206-1455288/.minikube/machines/addons-606180/id_rsa Username:docker}
	I1127 23:31:42.252143 1461234 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1127 23:31:42.302418 1461234 addons.go:231] Setting addon gcp-auth=true in "addons-606180"
	I1127 23:31:42.302474 1461234 host.go:66] Checking if "addons-606180" exists ...
	I1127 23:31:42.302951 1461234 cli_runner.go:164] Run: docker container inspect addons-606180 --format={{.State.Status}}
	I1127 23:31:42.327906 1461234 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1127 23:31:42.327989 1461234 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-606180
	I1127 23:31:42.358210 1461234 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34069 SSHKeyPath:/home/jenkins/minikube-integration/17206-1455288/.minikube/machines/addons-606180/id_rsa Username:docker}
	I1127 23:31:42.387776 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:31:42.457235 1461234 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I1127 23:31:42.459270 1461234 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1127 23:31:42.461326 1461234 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1127 23:31:42.461347 1461234 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1127 23:31:42.484388 1461234 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1127 23:31:42.484415 1461234 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1127 23:31:42.508158 1461234 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1127 23:31:42.508182 1461234 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5432 bytes)
	I1127 23:31:42.530801 1461234 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1127 23:31:42.532151 1461234 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:31:42.533292 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:31:42.886277 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:31:43.033091 1461234 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:31:43.033494 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:31:43.264671 1461234 addons.go:467] Verifying addon gcp-auth=true in "addons-606180"
	I1127 23:31:43.268329 1461234 out.go:177] * Verifying gcp-auth addon...
	I1127 23:31:43.270811 1461234 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1127 23:31:43.299009 1461234 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1127 23:31:43.299032 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:31:43.309746 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:31:43.386519 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:31:43.533130 1461234 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:31:43.534428 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:31:43.760723 1461234 node_ready.go:58] node "addons-606180" has status "Ready":"False"
	I1127 23:31:43.817439 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:31:43.887809 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:31:44.032233 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:31:44.034911 1461234 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:31:44.314193 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:31:44.385998 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:31:44.530940 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:31:44.542899 1461234 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:31:44.813920 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:31:44.886854 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:31:45.047716 1461234 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:31:45.071886 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:31:45.315435 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:31:45.394808 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:31:45.530031 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:31:45.530890 1461234 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:31:45.814848 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:31:45.886283 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:31:46.031768 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:31:46.039227 1461234 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:31:46.259718 1461234 node_ready.go:58] node "addons-606180" has status "Ready":"False"
	I1127 23:31:46.313592 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:31:46.386523 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:31:46.531688 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:31:46.533172 1461234 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:31:46.813752 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:31:46.890277 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:31:47.030813 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:31:47.032298 1461234 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:31:47.314463 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:31:47.387116 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:31:47.536874 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:31:47.538291 1461234 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:31:47.815874 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:31:47.886883 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:31:48.030257 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:31:48.031851 1461234 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:31:48.260073 1461234 node_ready.go:58] node "addons-606180" has status "Ready":"False"
	I1127 23:31:48.314397 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:31:48.387314 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:31:48.528107 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:31:48.529169 1461234 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:31:48.816000 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:31:48.886107 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:31:49.028506 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:31:49.029920 1461234 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:31:49.313486 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:31:49.385982 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:31:49.528806 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:31:49.529596 1461234 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:31:49.813882 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:31:49.886222 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:31:50.028991 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:31:50.030392 1461234 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:31:50.313880 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:31:50.385589 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:31:50.527707 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:31:50.528849 1461234 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:31:50.759227 1461234 node_ready.go:58] node "addons-606180" has status "Ready":"False"
	I1127 23:31:50.813935 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:31:50.886029 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:31:51.028245 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:31:51.030084 1461234 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:31:51.313662 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:31:51.386074 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:31:51.527935 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:31:51.528534 1461234 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:31:51.813385 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:31:51.885696 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:31:52.029288 1461234 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:31:52.029877 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:31:52.314099 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:31:52.386479 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:31:52.528089 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:31:52.529138 1461234 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:31:52.759701 1461234 node_ready.go:58] node "addons-606180" has status "Ready":"False"
	I1127 23:31:52.814637 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:31:52.885695 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:31:53.028502 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:31:53.029434 1461234 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:31:53.313756 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:31:53.386107 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:31:53.528641 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:31:53.530335 1461234 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:31:53.813164 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:31:53.885382 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:31:54.035058 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:31:54.036453 1461234 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:31:54.313691 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:31:54.386328 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:31:54.528349 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:31:54.529185 1461234 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:31:54.760096 1461234 node_ready.go:58] node "addons-606180" has status "Ready":"False"
	I1127 23:31:54.813640 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:31:54.886077 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:31:55.029707 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:31:55.030362 1461234 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:31:55.313131 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:31:55.386498 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:31:55.528179 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:31:55.529494 1461234 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:31:55.814096 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:31:55.886204 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:31:56.027874 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:31:56.029266 1461234 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:31:56.313549 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:31:56.386293 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:31:56.528794 1461234 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:31:56.529508 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:31:56.813417 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:31:56.885715 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:31:57.028314 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:31:57.030267 1461234 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:31:57.260238 1461234 node_ready.go:58] node "addons-606180" has status "Ready":"False"
	I1127 23:31:57.314069 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:31:57.386027 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:31:57.527578 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:31:57.529344 1461234 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:31:57.813994 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:31:57.886123 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:31:58.030734 1461234 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:31:58.031033 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:31:58.314294 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:31:58.386854 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:31:58.527727 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:31:58.530710 1461234 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:31:58.813385 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:31:58.886238 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:31:59.027907 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:31:59.029391 1461234 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:31:59.313427 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:31:59.385736 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:31:59.528364 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:31:59.530314 1461234 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:31:59.759866 1461234 node_ready.go:58] node "addons-606180" has status "Ready":"False"
	I1127 23:31:59.814639 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:31:59.885847 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:32:00.093197 1461234 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:32:00.094841 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:32:00.319455 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:32:00.397139 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:32:00.529583 1461234 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:32:00.530328 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:32:00.814106 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:32:00.885394 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:32:01.027975 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:32:01.029024 1461234 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:32:01.313218 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:32:01.385494 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:32:01.527931 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:32:01.530428 1461234 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:32:01.760160 1461234 node_ready.go:58] node "addons-606180" has status "Ready":"False"
	I1127 23:32:01.814410 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:32:01.885651 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:32:02.028581 1461234 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:32:02.029449 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:32:02.314098 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:32:02.385608 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:32:02.527768 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:32:02.529520 1461234 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:32:02.814054 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:32:02.886146 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:32:03.028236 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:32:03.029531 1461234 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:32:03.313500 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:32:03.385809 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:32:03.528139 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:32:03.530098 1461234 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:32:03.813409 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:32:03.885633 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:32:04.030436 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:32:04.032020 1461234 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:32:04.259973 1461234 node_ready.go:58] node "addons-606180" has status "Ready":"False"
	I1127 23:32:04.313265 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:32:04.385683 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:32:04.527762 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:32:04.529748 1461234 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:32:04.813666 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:32:04.886410 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:32:05.029520 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:32:05.030333 1461234 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:32:05.313455 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:32:05.385757 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:32:05.527775 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:32:05.529037 1461234 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:32:05.813463 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:32:05.886333 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:32:06.029188 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:32:06.029965 1461234 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:32:06.313329 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:32:06.386429 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:32:06.528025 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:32:06.529058 1461234 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:32:06.759601 1461234 node_ready.go:58] node "addons-606180" has status "Ready":"False"
	I1127 23:32:06.813485 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:32:06.885815 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:32:07.028984 1461234 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:32:07.029792 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:32:07.303806 1461234 node_ready.go:49] node "addons-606180" has status "Ready":"True"
	I1127 23:32:07.303839 1461234 node_ready.go:38] duration metric: took 30.240218857s waiting for node "addons-606180" to be "Ready" ...
	I1127 23:32:07.303850 1461234 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods, including those with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler], to be "Ready" ...
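The Ready transition and the follow-up pod waits above can be reproduced by hand with kubectl; a minimal sketch, assuming the context name shown in the log:

	kubectl --context addons-606180 wait --for=condition=Ready node/addons-606180 --timeout=6m0s
	kubectl --context addons-606180 -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=6m0s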
	I1127 23:32:07.328173 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:32:07.336561 1461234 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-l4fd2" in "kube-system" namespace to be "Ready" ...
	I1127 23:32:07.393345 1461234 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1127 23:32:07.393372 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:32:07.566726 1461234 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:32:07.594463 1461234 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1127 23:32:07.594496 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
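The kapi.go polling above is driven by label selectors; the same selectors can be queried directly to see which pods were found (a sketch, with selectors copied verbatim from the log):

	kubectl --context addons-606180 get pods -A -l kubernetes.io/minikube-addons=registry
	kubectl --context addons-606180 get pods -A -l kubernetes.io/minikube-addons=csi-hostpath-driver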
	I1127 23:32:07.836910 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:32:07.944523 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:32:08.122000 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:32:08.124265 1461234 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:32:08.315207 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:32:08.391655 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:32:08.531134 1461234 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:32:08.531800 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:32:08.814260 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:32:08.888413 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:32:09.031163 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:32:09.039511 1461234 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:32:09.317922 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:32:09.392997 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:32:09.410647 1461234 pod_ready.go:92] pod "coredns-5dd5756b68-l4fd2" in "kube-system" namespace has status "Ready":"True"
	I1127 23:32:09.410673 1461234 pod_ready.go:81] duration metric: took 2.074070896s waiting for pod "coredns-5dd5756b68-l4fd2" in "kube-system" namespace to be "Ready" ...
	I1127 23:32:09.410697 1461234 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-606180" in "kube-system" namespace to be "Ready" ...
	I1127 23:32:09.417874 1461234 pod_ready.go:92] pod "etcd-addons-606180" in "kube-system" namespace has status "Ready":"True"
	I1127 23:32:09.417899 1461234 pod_ready.go:81] duration metric: took 7.194496ms waiting for pod "etcd-addons-606180" in "kube-system" namespace to be "Ready" ...
	I1127 23:32:09.417915 1461234 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-606180" in "kube-system" namespace to be "Ready" ...
	I1127 23:32:09.424829 1461234 pod_ready.go:92] pod "kube-apiserver-addons-606180" in "kube-system" namespace has status "Ready":"True"
	I1127 23:32:09.424901 1461234 pod_ready.go:81] duration metric: took 6.976168ms waiting for pod "kube-apiserver-addons-606180" in "kube-system" namespace to be "Ready" ...
	I1127 23:32:09.424939 1461234 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-606180" in "kube-system" namespace to be "Ready" ...
	I1127 23:32:09.436026 1461234 pod_ready.go:92] pod "kube-controller-manager-addons-606180" in "kube-system" namespace has status "Ready":"True"
	I1127 23:32:09.436054 1461234 pod_ready.go:81] duration metric: took 11.077129ms waiting for pod "kube-controller-manager-addons-606180" in "kube-system" namespace to be "Ready" ...
	I1127 23:32:09.436070 1461234 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-jp57p" in "kube-system" namespace to be "Ready" ...
	I1127 23:32:09.531903 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:32:09.533102 1461234 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:32:09.660459 1461234 pod_ready.go:92] pod "kube-proxy-jp57p" in "kube-system" namespace has status "Ready":"True"
	I1127 23:32:09.660486 1461234 pod_ready.go:81] duration metric: took 224.40759ms waiting for pod "kube-proxy-jp57p" in "kube-system" namespace to be "Ready" ...
	I1127 23:32:09.660498 1461234 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-606180" in "kube-system" namespace to be "Ready" ...
	I1127 23:32:09.816165 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:32:09.887912 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:32:10.033158 1461234 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:32:10.036558 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:32:10.060512 1461234 pod_ready.go:92] pod "kube-scheduler-addons-606180" in "kube-system" namespace has status "Ready":"True"
	I1127 23:32:10.060541 1461234 pod_ready.go:81] duration metric: took 400.03408ms waiting for pod "kube-scheduler-addons-606180" in "kube-system" namespace to be "Ready" ...
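Each control-plane wait above keys off a single component label; a hedged shell sketch that runs the same checks sequentially, using the labels from the log's own selector list:

	for c in etcd kube-apiserver kube-controller-manager kube-scheduler; do
	  kubectl --context addons-606180 -n kube-system wait --for=condition=Ready pod -l component=$c --timeout=60s
	done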
	I1127 23:32:10.060555 1461234 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-7c66d45ddc-958nm" in "kube-system" namespace to be "Ready" ...
	I1127 23:32:10.313439 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:32:10.387978 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:32:10.528136 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:32:10.530715 1461234 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:32:10.815459 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:32:10.890881 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:32:11.028945 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:32:11.031781 1461234 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:32:11.313952 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:32:11.387895 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:32:11.531639 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:32:11.536408 1461234 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:32:11.814697 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:32:11.887558 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:32:12.038187 1461234 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:32:12.048023 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:32:12.315028 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:32:12.367621 1461234 pod_ready.go:102] pod "metrics-server-7c66d45ddc-958nm" in "kube-system" namespace has status "Ready":"False"
	I1127 23:32:12.399215 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:32:12.542418 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:32:12.543488 1461234 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:32:12.813471 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:32:12.888196 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:32:13.028968 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:32:13.031059 1461234 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:32:13.320154 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:32:13.387817 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:32:13.533329 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:32:13.534573 1461234 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:32:13.815681 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:32:13.888900 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:32:14.030796 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:32:14.033838 1461234 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:32:14.328465 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:32:14.369751 1461234 pod_ready.go:102] pod "metrics-server-7c66d45ddc-958nm" in "kube-system" namespace has status "Ready":"False"
	I1127 23:32:14.387559 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:32:14.528828 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:32:14.529740 1461234 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:32:14.813441 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:32:14.888422 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:32:15.057784 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:32:15.059376 1461234 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:32:15.314203 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:32:15.389763 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:32:15.531242 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:32:15.535567 1461234 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:32:15.813906 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:32:15.902530 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:32:16.030295 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:32:16.031588 1461234 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:32:16.313025 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:32:16.387832 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:32:16.528963 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:32:16.529819 1461234 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:32:16.814190 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:32:16.866791 1461234 pod_ready.go:102] pod "metrics-server-7c66d45ddc-958nm" in "kube-system" namespace has status "Ready":"False"
	I1127 23:32:16.887004 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:32:17.029542 1461234 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:32:17.030943 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:32:17.319219 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:32:17.388663 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:32:17.532264 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:32:17.533504 1461234 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:32:17.813497 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:32:17.887422 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:32:18.032579 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:32:18.033799 1461234 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:32:18.315795 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:32:18.387505 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:32:18.533577 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:32:18.534824 1461234 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:32:18.824909 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:32:18.867168 1461234 pod_ready.go:102] pod "metrics-server-7c66d45ddc-958nm" in "kube-system" namespace has status "Ready":"False"
	I1127 23:32:18.889495 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:32:19.029899 1461234 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:32:19.031942 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:32:19.314184 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:32:19.392007 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:32:19.531695 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:32:19.532121 1461234 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:32:19.816138 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:32:19.887570 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:32:20.059356 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:32:20.061840 1461234 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:32:20.314367 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:32:20.387554 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:32:20.531101 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:32:20.532361 1461234 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:32:20.814582 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:32:20.873789 1461234 pod_ready.go:102] pod "metrics-server-7c66d45ddc-958nm" in "kube-system" namespace has status "Ready":"False"
	I1127 23:32:20.888183 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:32:21.032147 1461234 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:32:21.034400 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:32:21.313916 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:32:21.388713 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:32:21.529976 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:32:21.534844 1461234 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:32:21.821886 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:32:21.888015 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:32:22.031831 1461234 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:32:22.032427 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:32:22.316321 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:32:22.395612 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:32:22.530943 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:32:22.535578 1461234 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:32:22.813366 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:32:22.887760 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:32:23.029359 1461234 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:32:23.030332 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:32:23.313633 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:32:23.366586 1461234 pod_ready.go:102] pod "metrics-server-7c66d45ddc-958nm" in "kube-system" namespace has status "Ready":"False"
	I1127 23:32:23.389312 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:32:23.537955 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:32:23.539356 1461234 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:32:23.816722 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:32:23.888111 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:32:24.030051 1461234 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:32:24.031203 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:32:24.316224 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:32:24.387305 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:32:24.535107 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:32:24.536440 1461234 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:32:24.819115 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:32:24.914353 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:32:25.030246 1461234 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:32:25.032038 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:32:25.317465 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:32:25.371240 1461234 pod_ready.go:102] pod "metrics-server-7c66d45ddc-958nm" in "kube-system" namespace has status "Ready":"False"
	I1127 23:32:25.388020 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:32:25.536826 1461234 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:32:25.543822 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:32:25.819311 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:32:25.888323 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:32:26.032862 1461234 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:32:26.033622 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:32:26.314832 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:32:26.387804 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:32:26.532687 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:32:26.535123 1461234 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:32:26.826891 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:32:26.892123 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:32:27.046815 1461234 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:32:27.047442 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:32:27.314782 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:32:27.370279 1461234 pod_ready.go:92] pod "metrics-server-7c66d45ddc-958nm" in "kube-system" namespace has status "Ready":"True"
	I1127 23:32:27.370311 1461234 pod_ready.go:81] duration metric: took 17.30974199s waiting for pod "metrics-server-7c66d45ddc-958nm" in "kube-system" namespace to be "Ready" ...
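For a pod like metrics-server that sits NotReady for a stretch, the Ready condition can be inspected directly rather than polled; a sketch using jsonpath, with the pod name copied from the log:

	kubectl --context addons-606180 -n kube-system get pod metrics-server-7c66d45ddc-958nm \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")]}'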
	I1127 23:32:27.370325 1461234 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-g52cs" in "kube-system" namespace to be "Ready" ...
	I1127 23:32:27.418104 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:32:27.533291 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:32:27.534237 1461234 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:32:27.814326 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:32:27.889922 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:32:28.047065 1461234 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:32:28.048674 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:32:28.313460 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:32:28.387781 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:32:28.532038 1461234 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:32:28.532545 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:32:28.814045 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:32:28.887158 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:32:29.029438 1461234 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:32:29.030934 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:32:29.314168 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:32:29.400282 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:32:29.432077 1461234 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-g52cs" in "kube-system" namespace has status "Ready":"False"
	I1127 23:32:29.534603 1461234 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:32:29.535668 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:32:29.819771 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:32:29.888127 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:32:30.063655 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:32:30.064759 1461234 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:32:30.313438 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:32:30.387677 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:32:30.529649 1461234 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:32:30.531098 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:32:30.813127 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:32:30.886626 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:32:31.029316 1461234 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:32:31.031101 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:32:31.314244 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:32:31.387106 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:32:31.529087 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:32:31.530255 1461234 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:32:31.814248 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:32:31.887562 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:32:31.930830 1461234 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-g52cs" in "kube-system" namespace has status "Ready":"False"
	I1127 23:32:32.029633 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:32:32.029904 1461234 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:32:32.318309 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:32:32.393203 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:32:32.546282 1461234 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:32:32.548674 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:32:32.814889 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:32:32.889542 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:32:33.032102 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:32:33.033415 1461234 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:32:33.315081 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:32:33.390338 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:32:33.530147 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:32:33.540538 1461234 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:32:33.813900 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:32:33.888755 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:32:33.932918 1461234 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-g52cs" in "kube-system" namespace has status "Ready":"False"
	I1127 23:32:34.033422 1461234 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:32:34.035383 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:32:34.315371 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:32:34.388483 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:32:34.536730 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:32:34.539346 1461234 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:32:34.814221 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:32:34.899673 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:32:35.029455 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:32:35.033908 1461234 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:32:35.314228 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:32:35.387837 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:32:35.534514 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:32:35.538076 1461234 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:32:35.814156 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:32:35.888754 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:32:36.028887 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:32:36.031180 1461234 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:32:36.314080 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:32:36.386654 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:32:36.429677 1461234 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-g52cs" in "kube-system" namespace has status "Ready":"False"
	I1127 23:32:36.529077 1461234 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:32:36.530595 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:32:36.813826 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:32:36.887470 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:32:36.929674 1461234 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-g52cs" in "kube-system" namespace has status "Ready":"True"
	I1127 23:32:36.929694 1461234 pod_ready.go:81] duration metric: took 9.559361595s waiting for pod "nvidia-device-plugin-daemonset-g52cs" in "kube-system" namespace to be "Ready" ...
	I1127 23:32:36.929737 1461234 pod_ready.go:38] duration metric: took 29.625874478s of extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1127 23:32:36.929761 1461234 api_server.go:52] waiting for apiserver process to appear ...
	I1127 23:32:36.929788 1461234 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1127 23:32:36.929891 1461234 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1127 23:32:36.980193 1461234 cri.go:89] found id: "4acde5ffc6f5e58cbd575d400616f38f441a2f39dcb5612e040722a56fef7614"
	I1127 23:32:36.980217 1461234 cri.go:89] found id: ""
	I1127 23:32:36.980226 1461234 logs.go:284] 1 containers: [4acde5ffc6f5e58cbd575d400616f38f441a2f39dcb5612e040722a56fef7614]
	I1127 23:32:36.980280 1461234 ssh_runner.go:195] Run: which crictl
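The crictl invocation above uses --quiet, which prints bare container IDs only; dropping that flag yields the human-readable table with state and image (a sketch, run in a shell on the node):

	sudo crictl ps -a --name=kube-apiserver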
	I1127 23:32:36.988021 1461234 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1127 23:32:36.988088 1461234 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1127 23:32:37.033960 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:32:37.037893 1461234 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:32:37.074296 1461234 cri.go:89] found id: "9eec496f1ec73e7e42c9ef2866cf724838c0a2f79226dd31ae75423fb329ed9d"
	I1127 23:32:37.074367 1461234 cri.go:89] found id: ""
	I1127 23:32:37.074389 1461234 logs.go:284] 1 containers: [9eec496f1ec73e7e42c9ef2866cf724838c0a2f79226dd31ae75423fb329ed9d]
	I1127 23:32:37.074472 1461234 ssh_runner.go:195] Run: which crictl
	I1127 23:32:37.081819 1461234 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1127 23:32:37.081976 1461234 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1127 23:32:37.172719 1461234 cri.go:89] found id: "625e9810ce5fdc84ea9f1dfb08c75ad3dfb50c11fbe1cbe3199bf62104b2cc8a"
	I1127 23:32:37.172789 1461234 cri.go:89] found id: ""
	I1127 23:32:37.172828 1461234 logs.go:284] 1 containers: [625e9810ce5fdc84ea9f1dfb08c75ad3dfb50c11fbe1cbe3199bf62104b2cc8a]
	I1127 23:32:37.172922 1461234 ssh_runner.go:195] Run: which crictl
	I1127 23:32:37.191990 1461234 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1127 23:32:37.192140 1461234 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1127 23:32:37.315697 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:32:37.389111 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:32:37.504073 1461234 cri.go:89] found id: "9eaa4515394b6211693689b3b71e75375a645c2dc8cf58ae0c34dffe92e8a53e"
	I1127 23:32:37.504145 1461234 cri.go:89] found id: ""
	I1127 23:32:37.504167 1461234 logs.go:284] 1 containers: [9eaa4515394b6211693689b3b71e75375a645c2dc8cf58ae0c34dffe92e8a53e]
	I1127 23:32:37.504258 1461234 ssh_runner.go:195] Run: which crictl
	I1127 23:32:37.518968 1461234 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1127 23:32:37.519090 1461234 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1127 23:32:37.539461 1461234 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:32:37.551612 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:32:37.691690 1461234 cri.go:89] found id: "cf56f8af181f7667845cf630a0931a8b5ec982efd7680d6286c08f39bed711a0"
	I1127 23:32:37.691764 1461234 cri.go:89] found id: ""
	I1127 23:32:37.691786 1461234 logs.go:284] 1 containers: [cf56f8af181f7667845cf630a0931a8b5ec982efd7680d6286c08f39bed711a0]
	I1127 23:32:37.691871 1461234 ssh_runner.go:195] Run: which crictl
	I1127 23:32:37.726494 1461234 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1127 23:32:37.726620 1461234 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1127 23:32:37.831020 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:32:37.874228 1461234 cri.go:89] found id: "1190f5ab8649a19420911be3fdff74aa915e240d2918bb980cffb6f2bd6120a0"
	I1127 23:32:37.874300 1461234 cri.go:89] found id: ""
	I1127 23:32:37.874335 1461234 logs.go:284] 1 containers: [1190f5ab8649a19420911be3fdff74aa915e240d2918bb980cffb6f2bd6120a0]
	I1127 23:32:37.874434 1461234 ssh_runner.go:195] Run: which crictl
	I1127 23:32:37.888615 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:32:37.900151 1461234 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1127 23:32:37.900287 1461234 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1127 23:32:38.038278 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:32:38.038627 1461234 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:32:38.079085 1461234 cri.go:89] found id: "4df224723a061c4322e74c4a6fe0c42b5f6bde92eff76afe7a08b5a4fdc29c87"
	I1127 23:32:38.079167 1461234 cri.go:89] found id: ""
	I1127 23:32:38.079190 1461234 logs.go:284] 1 containers: [4df224723a061c4322e74c4a6fe0c42b5f6bde92eff76afe7a08b5a4fdc29c87]
	I1127 23:32:38.079325 1461234 ssh_runner.go:195] Run: which crictl
	I1127 23:32:38.087003 1461234 logs.go:123] Gathering logs for CRI-O ...
	I1127 23:32:38.087091 1461234 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1127 23:32:38.193286 1461234 logs.go:123] Gathering logs for container status ...
	I1127 23:32:38.193335 1461234 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1127 23:32:38.260016 1461234 logs.go:123] Gathering logs for kubelet ...
	I1127 23:32:38.260046 1461234 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
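The log scrapes above cap each systemd unit at its last 400 lines; to narrow the same kubelet scrape to warnings and worse, journalctl's priority filter can be added (a sketch on the node shell):

	sudo journalctl -u kubelet -p warning -n 400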
	I1127 23:32:38.313468 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1127 23:32:38.325945 1461234 logs.go:138] Found kubelet problem: Nov 27 23:31:33 addons-606180 kubelet[1349]: W1127 23:31:33.296460    1349 reflector.go:535] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-606180" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-606180' and this object
	W1127 23:32:38.326200 1461234 logs.go:138] Found kubelet problem: Nov 27 23:31:33 addons-606180 kubelet[1349]: E1127 23:31:33.296500    1349 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-606180" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-606180' and this object
	W1127 23:32:38.326408 1461234 logs.go:138] Found kubelet problem: Nov 27 23:31:33 addons-606180 kubelet[1349]: W1127 23:31:33.299356    1349 reflector.go:535] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-606180" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-606180' and this object
	W1127 23:32:38.326632 1461234 logs.go:138] Found kubelet problem: Nov 27 23:31:33 addons-606180 kubelet[1349]: E1127 23:31:33.299393    1349 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-606180" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-606180' and this object
	W1127 23:32:38.329954 1461234 logs.go:138] Found kubelet problem: Nov 27 23:31:40 addons-606180 kubelet[1349]: W1127 23:31:40.102404    1349 reflector.go:535] object-"gadget"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-606180" cannot list resource "configmaps" in API group "" in the namespace "gadget": no relationship found between node 'addons-606180' and this object
	W1127 23:32:38.330185 1461234 logs.go:138] Found kubelet problem: Nov 27 23:31:40 addons-606180 kubelet[1349]: E1127 23:31:40.102446    1349 reflector.go:147] object-"gadget"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-606180" cannot list resource "configmaps" in API group "" in the namespace "gadget": no relationship found between node 'addons-606180' and this object
	W1127 23:32:38.337648 1461234 logs.go:138] Found kubelet problem: Nov 27 23:32:07 addons-606180 kubelet[1349]: W1127 23:32:07.200155    1349 reflector.go:535] object-"default"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-606180" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-606180' and this object
	W1127 23:32:38.337903 1461234 logs.go:138] Found kubelet problem: Nov 27 23:32:07 addons-606180 kubelet[1349]: E1127 23:32:07.200180    1349 reflector.go:147] object-"default"/"gcp-auth": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-606180" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-606180' and this object
	W1127 23:32:38.338109 1461234 logs.go:138] Found kubelet problem: Nov 27 23:32:07 addons-606180 kubelet[1349]: W1127 23:32:07.200226    1349 reflector.go:535] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-606180" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-606180' and this object
	W1127 23:32:38.338333 1461234 logs.go:138] Found kubelet problem: Nov 27 23:32:07 addons-606180 kubelet[1349]: E1127 23:32:07.200239    1349 reflector.go:147] object-"default"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-606180" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-606180' and this object
	W1127 23:32:38.338536 1461234 logs.go:138] Found kubelet problem: Nov 27 23:32:07 addons-606180 kubelet[1349]: W1127 23:32:07.200276    1349 reflector.go:535] object-"ingress-nginx"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-606180" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-606180' and this object
	W1127 23:32:38.338852 1461234 logs.go:138] Found kubelet problem: Nov 27 23:32:07 addons-606180 kubelet[1349]: E1127 23:32:07.200287    1349 reflector.go:147] object-"ingress-nginx"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-606180" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-606180' and this object
	W1127 23:32:38.339068 1461234 logs.go:138] Found kubelet problem: Nov 27 23:32:07 addons-606180 kubelet[1349]: W1127 23:32:07.200319    1349 reflector.go:535] object-"ingress-nginx"/"ingress-nginx-admission": failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-606180" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-606180' and this object
	W1127 23:32:38.339295 1461234 logs.go:138] Found kubelet problem: Nov 27 23:32:07 addons-606180 kubelet[1349]: E1127 23:32:07.200328    1349 reflector.go:147] object-"ingress-nginx"/"ingress-nginx-admission": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-606180" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-606180' and this object
	W1127 23:32:38.340534 1461234 logs.go:138] Found kubelet problem: Nov 27 23:32:07 addons-606180 kubelet[1349]: W1127 23:32:07.227825    1349 reflector.go:535] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-606180" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-606180' and this object
	W1127 23:32:38.340762 1461234 logs.go:138] Found kubelet problem: Nov 27 23:32:07 addons-606180 kubelet[1349]: E1127 23:32:07.227867    1349 reflector.go:147] object-"local-path-storage"/"local-path-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-606180" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-606180' and this object
	I1127 23:32:38.374261 1461234 logs.go:123] Gathering logs for dmesg ...
	I1127 23:32:38.374343 1461234 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1127 23:32:38.388709 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:32:38.419952 1461234 logs.go:123] Gathering logs for describe nodes ...
	I1127 23:32:38.419988 1461234 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1127 23:32:38.544079 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:32:38.545052 1461234 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:32:38.704552 1461234 logs.go:123] Gathering logs for etcd [9eec496f1ec73e7e42c9ef2866cf724838c0a2f79226dd31ae75423fb329ed9d] ...
	I1127 23:32:38.704881 1461234 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9eec496f1ec73e7e42c9ef2866cf724838c0a2f79226dd31ae75423fb329ed9d"
	I1127 23:32:38.809819 1461234 logs.go:123] Gathering logs for kube-controller-manager [1190f5ab8649a19420911be3fdff74aa915e240d2918bb980cffb6f2bd6120a0] ...
	I1127 23:32:38.810337 1461234 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1190f5ab8649a19420911be3fdff74aa915e240d2918bb980cffb6f2bd6120a0"
	I1127 23:32:38.817582 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:32:38.889012 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:32:38.924687 1461234 logs.go:123] Gathering logs for kindnet [4df224723a061c4322e74c4a6fe0c42b5f6bde92eff76afe7a08b5a4fdc29c87] ...
	I1127 23:32:38.924836 1461234 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4df224723a061c4322e74c4a6fe0c42b5f6bde92eff76afe7a08b5a4fdc29c87"
	I1127 23:32:39.022464 1461234 logs.go:123] Gathering logs for kube-apiserver [4acde5ffc6f5e58cbd575d400616f38f441a2f39dcb5612e040722a56fef7614] ...
	I1127 23:32:39.022494 1461234 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4acde5ffc6f5e58cbd575d400616f38f441a2f39dcb5612e040722a56fef7614"
	I1127 23:32:39.035382 1461234 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:32:39.036348 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:32:39.115062 1461234 logs.go:123] Gathering logs for coredns [625e9810ce5fdc84ea9f1dfb08c75ad3dfb50c11fbe1cbe3199bf62104b2cc8a] ...
	I1127 23:32:39.115147 1461234 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 625e9810ce5fdc84ea9f1dfb08c75ad3dfb50c11fbe1cbe3199bf62104b2cc8a"
	I1127 23:32:39.208034 1461234 logs.go:123] Gathering logs for kube-scheduler [9eaa4515394b6211693689b3b71e75375a645c2dc8cf58ae0c34dffe92e8a53e] ...
	I1127 23:32:39.208115 1461234 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9eaa4515394b6211693689b3b71e75375a645c2dc8cf58ae0c34dffe92e8a53e"
	I1127 23:32:39.312144 1461234 logs.go:123] Gathering logs for kube-proxy [cf56f8af181f7667845cf630a0931a8b5ec982efd7680d6286c08f39bed711a0] ...
	I1127 23:32:39.312219 1461234 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cf56f8af181f7667845cf630a0931a8b5ec982efd7680d6286c08f39bed711a0"
	I1127 23:32:39.314350 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:32:39.398263 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:32:39.426618 1461234 out.go:309] Setting ErrFile to fd 2...
	I1127 23:32:39.426644 1461234 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W1127 23:32:39.426690 1461234 out.go:239] X Problems detected in kubelet:
	W1127 23:32:39.426704 1461234 out.go:239]   Nov 27 23:32:07 addons-606180 kubelet[1349]: E1127 23:32:07.200287    1349 reflector.go:147] object-"ingress-nginx"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-606180" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-606180' and this object
	W1127 23:32:39.426712 1461234 out.go:239]   Nov 27 23:32:07 addons-606180 kubelet[1349]: W1127 23:32:07.200319    1349 reflector.go:535] object-"ingress-nginx"/"ingress-nginx-admission": failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-606180" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-606180' and this object
	W1127 23:32:39.426724 1461234 out.go:239]   Nov 27 23:32:07 addons-606180 kubelet[1349]: E1127 23:32:07.200328    1349 reflector.go:147] object-"ingress-nginx"/"ingress-nginx-admission": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-606180" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-606180' and this object
	W1127 23:32:39.426733 1461234 out.go:239]   Nov 27 23:32:07 addons-606180 kubelet[1349]: W1127 23:32:07.227825    1349 reflector.go:535] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-606180" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-606180' and this object
	W1127 23:32:39.426739 1461234 out.go:239]   Nov 27 23:32:07 addons-606180 kubelet[1349]: E1127 23:32:07.227867    1349 reflector.go:147] object-"local-path-storage"/"local-path-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-606180" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-606180' and this object
	I1127 23:32:39.426752 1461234 out.go:309] Setting ErrFile to fd 2...
	I1127 23:32:39.426758 1461234 out.go:343] TERM=,COLORTERM=, which probably does not support color
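The "Found kubelet problem" entries (logs.go:138) and the "X Problems detected in kubelet" summary (out.go:239) suggest a scan of the journal output for known failure patterns, which are then echoed back to the user. A rough sketch of such a pass is below; the regular expression and helper are hypothetical, chosen only to match the reflector errors visible in this log.

	package main

	import (
		"bufio"
		"fmt"
		"regexp"
		"strings"
	)

	// problemRe matches the kubelet reflector list/watch failures
	// seen in the journal excerpts above.
	var problemRe = regexp.MustCompile(`reflector\.go:\d+\].*(forbidden|failed to list)`)

	// findProblems scans journal text line by line and keeps any line
	// that looks like a kubelet problem worth surfacing.
	func findProblems(journal string) []string {
		var problems []string
		sc := bufio.NewScanner(strings.NewReader(journal))
		for sc.Scan() {
			if problemRe.MatchString(sc.Text()) {
				problems = append(problems, sc.Text())
			}
		}
		return problems
	}

	func main() {
		sample := `Nov 27 23:32:07 addons-606180 kubelet[1349]: E1127 23:32:07.200287 1349 reflector.go:147] object-"ingress-nginx"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden`
		for _, p := range findProblems(sample) {
			fmt.Println("X Problem detected in kubelet:", p)
		}
	}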
	I1127 23:32:39.530476 1461234 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:32:39.531607 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:32:39.814053 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:32:39.888000 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:32:40.034713 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:32:40.035256 1461234 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:32:40.313655 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:32:40.392866 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:32:40.528796 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:32:40.530406 1461234 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:32:40.813942 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:32:40.888293 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:32:41.043618 1461234 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:32:41.044964 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:32:41.313894 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:32:41.386743 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:32:41.533412 1461234 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:32:41.534881 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:32:41.813970 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:32:41.893240 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:32:42.032974 1461234 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:32:42.037125 1461234 kapi.go:107] duration metric: took 1m1.571090729s to wait for kubernetes.io/minikube-addons=registry ...
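The kapi.go:96 lines are a polling wait: the harness repeatedly lists pods matching a label selector (here kubernetes.io/minikube-addons=registry) until they leave Pending, taking 1m1.57s in this run. A minimal client-go sketch of that loop follows; the kubeconfig path and selector come from the log above, while the kube-system namespace and the 500ms poll interval are assumptions for illustration.

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		selector := "kubernetes.io/minikube-addons=registry"
		for {
			// List pods matching the addon label and check their phase.
			pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(),
				metav1.ListOptions{LabelSelector: selector})
			if err == nil {
				for _, p := range pods.Items {
					if p.Status.Phase == corev1.PodRunning {
						fmt.Printf("pod %s is Running\n", p.Name)
						return
					}
				}
			}
			time.Sleep(500 * time.Millisecond) // log timestamps show sub-second polls
		}
	}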
	I1127 23:32:42.315337 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:32:42.389712 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:32:42.530115 1461234 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:32:42.814116 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:32:42.887737 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:32:43.029640 1461234 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:32:43.313247 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:32:43.394384 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:32:43.530845 1461234 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:32:43.828757 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:32:43.924427 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:32:44.044557 1461234 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:32:44.315182 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:32:44.388344 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:32:44.531682 1461234 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:32:44.814190 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:32:44.897300 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:32:45.047225 1461234 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:32:45.317724 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:32:45.388036 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:32:45.531083 1461234 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:32:45.814931 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:32:45.897683 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:32:46.030021 1461234 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:32:46.314861 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:32:46.386910 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:32:46.529089 1461234 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:32:46.813657 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:32:46.886816 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:32:47.029408 1461234 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:32:47.314542 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:32:47.391221 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:32:47.529792 1461234 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:32:47.813727 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:32:47.887525 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:32:48.030400 1461234 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:32:48.313302 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:32:48.387220 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:32:48.538088 1461234 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:32:48.833214 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:32:48.887194 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:32:49.034323 1461234 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:32:49.314208 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:32:49.391727 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:32:49.427639 1461234 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1127 23:32:49.450178 1461234 api_server.go:72] duration metric: took 1m14.74904638s to wait for apiserver process to appear ...
	I1127 23:32:49.450258 1461234 api_server.go:88] waiting for apiserver healthz status ...
	I1127 23:32:49.450304 1461234 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1127 23:32:49.450381 1461234 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1127 23:32:49.517685 1461234 cri.go:89] found id: "4acde5ffc6f5e58cbd575d400616f38f441a2f39dcb5612e040722a56fef7614"
	I1127 23:32:49.517757 1461234 cri.go:89] found id: ""
	I1127 23:32:49.517779 1461234 logs.go:284] 1 containers: [4acde5ffc6f5e58cbd575d400616f38f441a2f39dcb5612e040722a56fef7614]
	I1127 23:32:49.517925 1461234 ssh_runner.go:195] Run: which crictl
	I1127 23:32:49.522627 1461234 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1127 23:32:49.522729 1461234 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1127 23:32:49.529710 1461234 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:32:49.580686 1461234 cri.go:89] found id: "9eec496f1ec73e7e42c9ef2866cf724838c0a2f79226dd31ae75423fb329ed9d"
	I1127 23:32:49.580711 1461234 cri.go:89] found id: ""
	I1127 23:32:49.580720 1461234 logs.go:284] 1 containers: [9eec496f1ec73e7e42c9ef2866cf724838c0a2f79226dd31ae75423fb329ed9d]
	I1127 23:32:49.580801 1461234 ssh_runner.go:195] Run: which crictl
	I1127 23:32:49.585318 1461234 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1127 23:32:49.585405 1461234 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1127 23:32:49.628229 1461234 cri.go:89] found id: "625e9810ce5fdc84ea9f1dfb08c75ad3dfb50c11fbe1cbe3199bf62104b2cc8a"
	I1127 23:32:49.628253 1461234 cri.go:89] found id: ""
	I1127 23:32:49.628261 1461234 logs.go:284] 1 containers: [625e9810ce5fdc84ea9f1dfb08c75ad3dfb50c11fbe1cbe3199bf62104b2cc8a]
	I1127 23:32:49.628346 1461234 ssh_runner.go:195] Run: which crictl
	I1127 23:32:49.632737 1461234 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1127 23:32:49.632838 1461234 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1127 23:32:49.675697 1461234 cri.go:89] found id: "9eaa4515394b6211693689b3b71e75375a645c2dc8cf58ae0c34dffe92e8a53e"
	I1127 23:32:49.675768 1461234 cri.go:89] found id: ""
	I1127 23:32:49.675782 1461234 logs.go:284] 1 containers: [9eaa4515394b6211693689b3b71e75375a645c2dc8cf58ae0c34dffe92e8a53e]
	I1127 23:32:49.675846 1461234 ssh_runner.go:195] Run: which crictl
	I1127 23:32:49.680333 1461234 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1127 23:32:49.680402 1461234 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1127 23:32:49.732222 1461234 cri.go:89] found id: "cf56f8af181f7667845cf630a0931a8b5ec982efd7680d6286c08f39bed711a0"
	I1127 23:32:49.732245 1461234 cri.go:89] found id: ""
	I1127 23:32:49.732252 1461234 logs.go:284] 1 containers: [cf56f8af181f7667845cf630a0931a8b5ec982efd7680d6286c08f39bed711a0]
	I1127 23:32:49.732306 1461234 ssh_runner.go:195] Run: which crictl
	I1127 23:32:49.736985 1461234 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1127 23:32:49.737090 1461234 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1127 23:32:49.783188 1461234 cri.go:89] found id: "1190f5ab8649a19420911be3fdff74aa915e240d2918bb980cffb6f2bd6120a0"
	I1127 23:32:49.783211 1461234 cri.go:89] found id: ""
	I1127 23:32:49.783219 1461234 logs.go:284] 1 containers: [1190f5ab8649a19420911be3fdff74aa915e240d2918bb980cffb6f2bd6120a0]
	I1127 23:32:49.783273 1461234 ssh_runner.go:195] Run: which crictl
	I1127 23:32:49.788122 1461234 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1127 23:32:49.788190 1461234 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1127 23:32:49.813620 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:32:49.839707 1461234 cri.go:89] found id: "4df224723a061c4322e74c4a6fe0c42b5f6bde92eff76afe7a08b5a4fdc29c87"
	I1127 23:32:49.839730 1461234 cri.go:89] found id: ""
	I1127 23:32:49.839737 1461234 logs.go:284] 1 containers: [4df224723a061c4322e74c4a6fe0c42b5f6bde92eff76afe7a08b5a4fdc29c87]
	I1127 23:32:49.839794 1461234 ssh_runner.go:195] Run: which crictl
	I1127 23:32:49.844039 1461234 logs.go:123] Gathering logs for describe nodes ...
	I1127 23:32:49.844062 1461234 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1127 23:32:49.888094 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:32:49.989599 1461234 logs.go:123] Gathering logs for etcd [9eec496f1ec73e7e42c9ef2866cf724838c0a2f79226dd31ae75423fb329ed9d] ...
	I1127 23:32:49.989628 1461234 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9eec496f1ec73e7e42c9ef2866cf724838c0a2f79226dd31ae75423fb329ed9d"
	I1127 23:32:50.031687 1461234 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:32:50.055752 1461234 logs.go:123] Gathering logs for coredns [625e9810ce5fdc84ea9f1dfb08c75ad3dfb50c11fbe1cbe3199bf62104b2cc8a] ...
	I1127 23:32:50.055784 1461234 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 625e9810ce5fdc84ea9f1dfb08c75ad3dfb50c11fbe1cbe3199bf62104b2cc8a"
	I1127 23:32:50.115059 1461234 logs.go:123] Gathering logs for kube-scheduler [9eaa4515394b6211693689b3b71e75375a645c2dc8cf58ae0c34dffe92e8a53e] ...
	I1127 23:32:50.115096 1461234 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9eaa4515394b6211693689b3b71e75375a645c2dc8cf58ae0c34dffe92e8a53e"
	I1127 23:32:50.164373 1461234 logs.go:123] Gathering logs for kube-controller-manager [1190f5ab8649a19420911be3fdff74aa915e240d2918bb980cffb6f2bd6120a0] ...
	I1127 23:32:50.164402 1461234 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1190f5ab8649a19420911be3fdff74aa915e240d2918bb980cffb6f2bd6120a0"
	I1127 23:32:50.272834 1461234 logs.go:123] Gathering logs for container status ...
	I1127 23:32:50.272868 1461234 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1127 23:32:50.313201 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:32:50.330105 1461234 logs.go:123] Gathering logs for kubelet ...
	I1127 23:32:50.330133 1461234 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1127 23:32:50.385059 1461234 logs.go:138] Found kubelet problem: Nov 27 23:31:33 addons-606180 kubelet[1349]: W1127 23:31:33.296460    1349 reflector.go:535] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-606180" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-606180' and this object
	W1127 23:32:50.385282 1461234 logs.go:138] Found kubelet problem: Nov 27 23:31:33 addons-606180 kubelet[1349]: E1127 23:31:33.296500    1349 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-606180" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-606180' and this object
	W1127 23:32:50.385466 1461234 logs.go:138] Found kubelet problem: Nov 27 23:31:33 addons-606180 kubelet[1349]: W1127 23:31:33.299356    1349 reflector.go:535] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-606180" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-606180' and this object
	W1127 23:32:50.385813 1461234 logs.go:138] Found kubelet problem: Nov 27 23:31:33 addons-606180 kubelet[1349]: E1127 23:31:33.299393    1349 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-606180" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-606180' and this object
	I1127 23:32:50.388391 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1127 23:32:50.389416 1461234 logs.go:138] Found kubelet problem: Nov 27 23:31:40 addons-606180 kubelet[1349]: W1127 23:31:40.102404    1349 reflector.go:535] object-"gadget"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-606180" cannot list resource "configmaps" in API group "" in the namespace "gadget": no relationship found between node 'addons-606180' and this object
	W1127 23:32:50.389628 1461234 logs.go:138] Found kubelet problem: Nov 27 23:31:40 addons-606180 kubelet[1349]: E1127 23:31:40.102446    1349 reflector.go:147] object-"gadget"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-606180" cannot list resource "configmaps" in API group "" in the namespace "gadget": no relationship found between node 'addons-606180' and this object
	W1127 23:32:50.394508 1461234 logs.go:138] Found kubelet problem: Nov 27 23:32:07 addons-606180 kubelet[1349]: W1127 23:32:07.200155    1349 reflector.go:535] object-"default"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-606180" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-606180' and this object
	W1127 23:32:50.394708 1461234 logs.go:138] Found kubelet problem: Nov 27 23:32:07 addons-606180 kubelet[1349]: E1127 23:32:07.200180    1349 reflector.go:147] object-"default"/"gcp-auth": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-606180" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-606180' and this object
	W1127 23:32:50.394887 1461234 logs.go:138] Found kubelet problem: Nov 27 23:32:07 addons-606180 kubelet[1349]: W1127 23:32:07.200226    1349 reflector.go:535] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-606180" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-606180' and this object
	W1127 23:32:50.395085 1461234 logs.go:138] Found kubelet problem: Nov 27 23:32:07 addons-606180 kubelet[1349]: E1127 23:32:07.200239    1349 reflector.go:147] object-"default"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-606180" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-606180' and this object
	W1127 23:32:50.395266 1461234 logs.go:138] Found kubelet problem: Nov 27 23:32:07 addons-606180 kubelet[1349]: W1127 23:32:07.200276    1349 reflector.go:535] object-"ingress-nginx"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-606180" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-606180' and this object
	W1127 23:32:50.395468 1461234 logs.go:138] Found kubelet problem: Nov 27 23:32:07 addons-606180 kubelet[1349]: E1127 23:32:07.200287    1349 reflector.go:147] object-"ingress-nginx"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-606180" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-606180' and this object
	W1127 23:32:50.395654 1461234 logs.go:138] Found kubelet problem: Nov 27 23:32:07 addons-606180 kubelet[1349]: W1127 23:32:07.200319    1349 reflector.go:535] object-"ingress-nginx"/"ingress-nginx-admission": failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-606180" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-606180' and this object
	W1127 23:32:50.395858 1461234 logs.go:138] Found kubelet problem: Nov 27 23:32:07 addons-606180 kubelet[1349]: E1127 23:32:07.200328    1349 reflector.go:147] object-"ingress-nginx"/"ingress-nginx-admission": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-606180" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-606180' and this object
	W1127 23:32:50.397066 1461234 logs.go:138] Found kubelet problem: Nov 27 23:32:07 addons-606180 kubelet[1349]: W1127 23:32:07.227825    1349 reflector.go:535] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-606180" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-606180' and this object
	W1127 23:32:50.397272 1461234 logs.go:138] Found kubelet problem: Nov 27 23:32:07 addons-606180 kubelet[1349]: E1127 23:32:07.227867    1349 reflector.go:147] object-"local-path-storage"/"local-path-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-606180" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-606180' and this object
	I1127 23:32:50.431634 1461234 logs.go:123] Gathering logs for dmesg ...
	I1127 23:32:50.431663 1461234 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1127 23:32:50.454916 1461234 logs.go:123] Gathering logs for kindnet [4df224723a061c4322e74c4a6fe0c42b5f6bde92eff76afe7a08b5a4fdc29c87] ...
	I1127 23:32:50.454954 1461234 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4df224723a061c4322e74c4a6fe0c42b5f6bde92eff76afe7a08b5a4fdc29c87"
	I1127 23:32:50.511592 1461234 logs.go:123] Gathering logs for CRI-O ...
	I1127 23:32:50.511618 1461234 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1127 23:32:50.540205 1461234 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:32:50.605721 1461234 logs.go:123] Gathering logs for kube-apiserver [4acde5ffc6f5e58cbd575d400616f38f441a2f39dcb5612e040722a56fef7614] ...
	I1127 23:32:50.605757 1461234 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4acde5ffc6f5e58cbd575d400616f38f441a2f39dcb5612e040722a56fef7614"
	I1127 23:32:50.705574 1461234 logs.go:123] Gathering logs for kube-proxy [cf56f8af181f7667845cf630a0931a8b5ec982efd7680d6286c08f39bed711a0] ...
	I1127 23:32:50.705651 1461234 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cf56f8af181f7667845cf630a0931a8b5ec982efd7680d6286c08f39bed711a0"
	I1127 23:32:50.772676 1461234 out.go:309] Setting ErrFile to fd 2...
	I1127 23:32:50.772754 1461234 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W1127 23:32:50.772834 1461234 out.go:239] X Problems detected in kubelet:
	W1127 23:32:50.772878 1461234 out.go:239]   Nov 27 23:32:07 addons-606180 kubelet[1349]: E1127 23:32:07.200287    1349 reflector.go:147] object-"ingress-nginx"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-606180" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-606180' and this object
	W1127 23:32:50.772924 1461234 out.go:239]   Nov 27 23:32:07 addons-606180 kubelet[1349]: W1127 23:32:07.200319    1349 reflector.go:535] object-"ingress-nginx"/"ingress-nginx-admission": failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-606180" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-606180' and this object
	W1127 23:32:50.772963 1461234 out.go:239]   Nov 27 23:32:07 addons-606180 kubelet[1349]: E1127 23:32:07.200328    1349 reflector.go:147] object-"ingress-nginx"/"ingress-nginx-admission": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-606180" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-606180' and this object
	W1127 23:32:50.773006 1461234 out.go:239]   Nov 27 23:32:07 addons-606180 kubelet[1349]: W1127 23:32:07.227825    1349 reflector.go:535] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-606180" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-606180' and this object
	W1127 23:32:50.773045 1461234 out.go:239]   Nov 27 23:32:07 addons-606180 kubelet[1349]: E1127 23:32:07.227867    1349 reflector.go:147] object-"local-path-storage"/"local-path-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-606180" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-606180' and this object
	I1127 23:32:50.773089 1461234 out.go:309] Setting ErrFile to fd 2...
	I1127 23:32:50.773097 1461234 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1127 23:32:50.813642 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:32:50.887392 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:32:51.042224 1461234 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:32:51.313805 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:32:51.388184 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:32:51.530650 1461234 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:32:51.815231 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:32:51.888716 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:32:52.030510 1461234 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:32:52.314366 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:32:52.387762 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:32:52.529421 1461234 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:32:52.831912 1461234 kapi.go:107] duration metric: took 1m9.561095284s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1127 23:32:52.836949 1461234 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-606180 cluster.
	I1127 23:32:52.840413 1461234 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1127 23:32:52.850333 1461234 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1127 23:32:52.915932 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:32:53.030938 1461234 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:32:53.392727 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:32:53.529748 1461234 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:32:53.887843 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:32:54.030469 1461234 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:32:54.387130 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:32:54.528936 1461234 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:32:54.887196 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:32:55.042580 1461234 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:32:55.388245 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:32:55.530637 1461234 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:32:55.887917 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:32:56.029289 1461234 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:32:56.387676 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:32:56.529693 1461234 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:32:56.889175 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:32:57.030753 1461234 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:32:57.387956 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:32:57.530343 1461234 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:32:57.887314 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:32:58.029692 1461234 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:32:58.387631 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:32:58.529909 1461234 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:32:58.889185 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:32:59.032321 1461234 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:32:59.389241 1461234 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:32:59.530336 1461234 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:32:59.887071 1461234 kapi.go:107] duration metric: took 1m19.033398873s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1127 23:33:00.049933 1461234 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:33:00.529546 1461234 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:33:00.774985 1461234 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1127 23:33:00.785116 1461234 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1127 23:33:00.786505 1461234 api_server.go:141] control plane version: v1.28.4
	I1127 23:33:00.786527 1461234 api_server.go:131] duration metric: took 11.336248285s to wait for apiserver health ...
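The healthz wait above amounts to an HTTPS GET against https://192.168.49.2:8443/healthz that succeeds once the apiserver returns 200 with body "ok". A self-contained Go sketch of that probe is below; it skips certificate verification for brevity, since the apiserver serves a cert signed by the cluster CA, which this sketch does not load.

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
	)

	func main() {
		client := &http.Client{Transport: &http.Transport{
			// Illustration only: trust the self-signed apiserver cert
			// by skipping verification instead of loading the cluster CA.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		}}
		resp, err := client.Get("https://192.168.49.2:8443/healthz")
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("%d: %s\n", resp.StatusCode, body) // expect "200: ok"
	}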
	I1127 23:33:00.786536 1461234 system_pods.go:43] waiting for kube-system pods to appear ...
	I1127 23:33:00.786555 1461234 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1127 23:33:00.786623 1461234 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1127 23:33:00.841912 1461234 cri.go:89] found id: "4acde5ffc6f5e58cbd575d400616f38f441a2f39dcb5612e040722a56fef7614"
	I1127 23:33:00.841933 1461234 cri.go:89] found id: ""
	I1127 23:33:00.841941 1461234 logs.go:284] 1 containers: [4acde5ffc6f5e58cbd575d400616f38f441a2f39dcb5612e040722a56fef7614]
	I1127 23:33:00.841996 1461234 ssh_runner.go:195] Run: which crictl
	I1127 23:33:00.846604 1461234 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1127 23:33:00.846676 1461234 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1127 23:33:00.887304 1461234 cri.go:89] found id: "9eec496f1ec73e7e42c9ef2866cf724838c0a2f79226dd31ae75423fb329ed9d"
	I1127 23:33:00.887325 1461234 cri.go:89] found id: ""
	I1127 23:33:00.887332 1461234 logs.go:284] 1 containers: [9eec496f1ec73e7e42c9ef2866cf724838c0a2f79226dd31ae75423fb329ed9d]
	I1127 23:33:00.887387 1461234 ssh_runner.go:195] Run: which crictl
	I1127 23:33:00.892079 1461234 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1127 23:33:00.892155 1461234 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1127 23:33:00.937705 1461234 cri.go:89] found id: "625e9810ce5fdc84ea9f1dfb08c75ad3dfb50c11fbe1cbe3199bf62104b2cc8a"
	I1127 23:33:00.937728 1461234 cri.go:89] found id: ""
	I1127 23:33:00.937736 1461234 logs.go:284] 1 containers: [625e9810ce5fdc84ea9f1dfb08c75ad3dfb50c11fbe1cbe3199bf62104b2cc8a]
	I1127 23:33:00.937792 1461234 ssh_runner.go:195] Run: which crictl
	I1127 23:33:00.942238 1461234 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1127 23:33:00.942312 1461234 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1127 23:33:00.985469 1461234 cri.go:89] found id: "9eaa4515394b6211693689b3b71e75375a645c2dc8cf58ae0c34dffe92e8a53e"
	I1127 23:33:00.985493 1461234 cri.go:89] found id: ""
	I1127 23:33:00.985505 1461234 logs.go:284] 1 containers: [9eaa4515394b6211693689b3b71e75375a645c2dc8cf58ae0c34dffe92e8a53e]
	I1127 23:33:00.985590 1461234 ssh_runner.go:195] Run: which crictl
	I1127 23:33:00.990469 1461234 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1127 23:33:00.990547 1461234 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1127 23:33:01.029346 1461234 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:33:01.044395 1461234 cri.go:89] found id: "cf56f8af181f7667845cf630a0931a8b5ec982efd7680d6286c08f39bed711a0"
	I1127 23:33:01.044418 1461234 cri.go:89] found id: ""
	I1127 23:33:01.044426 1461234 logs.go:284] 1 containers: [cf56f8af181f7667845cf630a0931a8b5ec982efd7680d6286c08f39bed711a0]
	I1127 23:33:01.044494 1461234 ssh_runner.go:195] Run: which crictl
	I1127 23:33:01.049155 1461234 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1127 23:33:01.049229 1461234 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1127 23:33:01.094734 1461234 cri.go:89] found id: "1190f5ab8649a19420911be3fdff74aa915e240d2918bb980cffb6f2bd6120a0"
	I1127 23:33:01.094797 1461234 cri.go:89] found id: ""
	I1127 23:33:01.094819 1461234 logs.go:284] 1 containers: [1190f5ab8649a19420911be3fdff74aa915e240d2918bb980cffb6f2bd6120a0]
	I1127 23:33:01.094885 1461234 ssh_runner.go:195] Run: which crictl
	I1127 23:33:01.099465 1461234 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1127 23:33:01.099552 1461234 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1127 23:33:01.144061 1461234 cri.go:89] found id: "4df224723a061c4322e74c4a6fe0c42b5f6bde92eff76afe7a08b5a4fdc29c87"
	I1127 23:33:01.144085 1461234 cri.go:89] found id: ""
	I1127 23:33:01.144093 1461234 logs.go:284] 1 containers: [4df224723a061c4322e74c4a6fe0c42b5f6bde92eff76afe7a08b5a4fdc29c87]
	I1127 23:33:01.144151 1461234 ssh_runner.go:195] Run: which crictl
	I1127 23:33:01.148863 1461234 logs.go:123] Gathering logs for CRI-O ...
	I1127 23:33:01.148889 1461234 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1127 23:33:01.254574 1461234 logs.go:123] Gathering logs for kubelet ...
	I1127 23:33:01.254617 1461234 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1127 23:33:01.315152 1461234 logs.go:138] Found kubelet problem: Nov 27 23:31:33 addons-606180 kubelet[1349]: W1127 23:31:33.296460    1349 reflector.go:535] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-606180" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-606180' and this object
	W1127 23:33:01.315368 1461234 logs.go:138] Found kubelet problem: Nov 27 23:31:33 addons-606180 kubelet[1349]: E1127 23:31:33.296500    1349 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-606180" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-606180' and this object
	W1127 23:33:01.315556 1461234 logs.go:138] Found kubelet problem: Nov 27 23:31:33 addons-606180 kubelet[1349]: W1127 23:31:33.299356    1349 reflector.go:535] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-606180" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-606180' and this object
	W1127 23:33:01.315764 1461234 logs.go:138] Found kubelet problem: Nov 27 23:31:33 addons-606180 kubelet[1349]: E1127 23:31:33.299393    1349 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-606180" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-606180' and this object
	W1127 23:33:01.319124 1461234 logs.go:138] Found kubelet problem: Nov 27 23:31:40 addons-606180 kubelet[1349]: W1127 23:31:40.102404    1349 reflector.go:535] object-"gadget"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-606180" cannot list resource "configmaps" in API group "" in the namespace "gadget": no relationship found between node 'addons-606180' and this object
	W1127 23:33:01.319331 1461234 logs.go:138] Found kubelet problem: Nov 27 23:31:40 addons-606180 kubelet[1349]: E1127 23:31:40.102446    1349 reflector.go:147] object-"gadget"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-606180" cannot list resource "configmaps" in API group "" in the namespace "gadget": no relationship found between node 'addons-606180' and this object
	W1127 23:33:01.323911 1461234 logs.go:138] Found kubelet problem: Nov 27 23:32:07 addons-606180 kubelet[1349]: W1127 23:32:07.200155    1349 reflector.go:535] object-"default"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-606180" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-606180' and this object
	W1127 23:33:01.324099 1461234 logs.go:138] Found kubelet problem: Nov 27 23:32:07 addons-606180 kubelet[1349]: E1127 23:32:07.200180    1349 reflector.go:147] object-"default"/"gcp-auth": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-606180" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-606180' and this object
	W1127 23:33:01.324284 1461234 logs.go:138] Found kubelet problem: Nov 27 23:32:07 addons-606180 kubelet[1349]: W1127 23:32:07.200226    1349 reflector.go:535] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-606180" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-606180' and this object
	W1127 23:33:01.324487 1461234 logs.go:138] Found kubelet problem: Nov 27 23:32:07 addons-606180 kubelet[1349]: E1127 23:32:07.200239    1349 reflector.go:147] object-"default"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-606180" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-606180' and this object
	W1127 23:33:01.324673 1461234 logs.go:138] Found kubelet problem: Nov 27 23:32:07 addons-606180 kubelet[1349]: W1127 23:32:07.200276    1349 reflector.go:535] object-"ingress-nginx"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-606180" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-606180' and this object
	W1127 23:33:01.324879 1461234 logs.go:138] Found kubelet problem: Nov 27 23:32:07 addons-606180 kubelet[1349]: E1127 23:32:07.200287    1349 reflector.go:147] object-"ingress-nginx"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-606180" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-606180' and this object
	W1127 23:33:01.325068 1461234 logs.go:138] Found kubelet problem: Nov 27 23:32:07 addons-606180 kubelet[1349]: W1127 23:32:07.200319    1349 reflector.go:535] object-"ingress-nginx"/"ingress-nginx-admission": failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-606180" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-606180' and this object
	W1127 23:33:01.325284 1461234 logs.go:138] Found kubelet problem: Nov 27 23:32:07 addons-606180 kubelet[1349]: E1127 23:32:07.200328    1349 reflector.go:147] object-"ingress-nginx"/"ingress-nginx-admission": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-606180" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-606180' and this object
	W1127 23:33:01.326526 1461234 logs.go:138] Found kubelet problem: Nov 27 23:32:07 addons-606180 kubelet[1349]: W1127 23:32:07.227825    1349 reflector.go:535] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-606180" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-606180' and this object
	W1127 23:33:01.326737 1461234 logs.go:138] Found kubelet problem: Nov 27 23:32:07 addons-606180 kubelet[1349]: E1127 23:32:07.227867    1349 reflector.go:147] object-"local-path-storage"/"local-path-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-606180" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-606180' and this object
	I1127 23:33:01.363847 1461234 logs.go:123] Gathering logs for kube-apiserver [4acde5ffc6f5e58cbd575d400616f38f441a2f39dcb5612e040722a56fef7614] ...
	I1127 23:33:01.363886 1461234 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4acde5ffc6f5e58cbd575d400616f38f441a2f39dcb5612e040722a56fef7614"
	I1127 23:33:01.425963 1461234 logs.go:123] Gathering logs for etcd [9eec496f1ec73e7e42c9ef2866cf724838c0a2f79226dd31ae75423fb329ed9d] ...
	I1127 23:33:01.425997 1461234 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9eec496f1ec73e7e42c9ef2866cf724838c0a2f79226dd31ae75423fb329ed9d"
	I1127 23:33:01.481042 1461234 logs.go:123] Gathering logs for kube-scheduler [9eaa4515394b6211693689b3b71e75375a645c2dc8cf58ae0c34dffe92e8a53e] ...
	I1127 23:33:01.481085 1461234 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9eaa4515394b6211693689b3b71e75375a645c2dc8cf58ae0c34dffe92e8a53e"
	I1127 23:33:01.523100 1461234 logs.go:123] Gathering logs for kube-proxy [cf56f8af181f7667845cf630a0931a8b5ec982efd7680d6286c08f39bed711a0] ...
	I1127 23:33:01.523128 1461234 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cf56f8af181f7667845cf630a0931a8b5ec982efd7680d6286c08f39bed711a0"
	I1127 23:33:01.531765 1461234 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:33:01.569140 1461234 logs.go:123] Gathering logs for kube-controller-manager [1190f5ab8649a19420911be3fdff74aa915e240d2918bb980cffb6f2bd6120a0] ...
	I1127 23:33:01.569168 1461234 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1190f5ab8649a19420911be3fdff74aa915e240d2918bb980cffb6f2bd6120a0"
	I1127 23:33:01.659270 1461234 logs.go:123] Gathering logs for kindnet [4df224723a061c4322e74c4a6fe0c42b5f6bde92eff76afe7a08b5a4fdc29c87] ...
	I1127 23:33:01.659321 1461234 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4df224723a061c4322e74c4a6fe0c42b5f6bde92eff76afe7a08b5a4fdc29c87"
	I1127 23:33:01.702899 1461234 logs.go:123] Gathering logs for dmesg ...
	I1127 23:33:01.702937 1461234 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1127 23:33:01.725512 1461234 logs.go:123] Gathering logs for describe nodes ...
	I1127 23:33:01.725595 1461234 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1127 23:33:01.902604 1461234 logs.go:123] Gathering logs for coredns [625e9810ce5fdc84ea9f1dfb08c75ad3dfb50c11fbe1cbe3199bf62104b2cc8a] ...
	I1127 23:33:01.902635 1461234 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 625e9810ce5fdc84ea9f1dfb08c75ad3dfb50c11fbe1cbe3199bf62104b2cc8a"
	I1127 23:33:01.949543 1461234 logs.go:123] Gathering logs for container status ...
	I1127 23:33:01.949575 1461234 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1127 23:33:02.014722 1461234 out.go:309] Setting ErrFile to fd 2...
	I1127 23:33:02.014753 1461234 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W1127 23:33:02.014811 1461234 out.go:239] X Problems detected in kubelet:
	W1127 23:33:02.014825 1461234 out.go:239]   Nov 27 23:32:07 addons-606180 kubelet[1349]: E1127 23:32:07.200287    1349 reflector.go:147] object-"ingress-nginx"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-606180" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-606180' and this object
	W1127 23:33:02.014832 1461234 out.go:239]   Nov 27 23:32:07 addons-606180 kubelet[1349]: W1127 23:32:07.200319    1349 reflector.go:535] object-"ingress-nginx"/"ingress-nginx-admission": failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-606180" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-606180' and this object
	W1127 23:33:02.014841 1461234 out.go:239]   Nov 27 23:32:07 addons-606180 kubelet[1349]: E1127 23:32:07.200328    1349 reflector.go:147] object-"ingress-nginx"/"ingress-nginx-admission": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-606180" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-606180' and this object
	W1127 23:33:02.014848 1461234 out.go:239]   Nov 27 23:32:07 addons-606180 kubelet[1349]: W1127 23:32:07.227825    1349 reflector.go:535] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-606180" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-606180' and this object
	W1127 23:33:02.014893 1461234 out.go:239]   Nov 27 23:32:07 addons-606180 kubelet[1349]: E1127 23:32:07.227867    1349 reflector.go:147] object-"local-path-storage"/"local-path-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-606180" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-606180' and this object
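	The reflector denials listed above ("no relationship found between node 'addons-606180' and this object") come from the kube-apiserver's node authorizer: the kubelet asks for a ConfigMap or Secret a moment before the pod that references it is registered against the node in the authorizer's graph, so the first list is refused and the watch succeeds on a later retry. A hedged diagnostic sketch, assuming kubectl access to the same cluster, is simply to confirm the referenced objects exist:
	
	  # Sketch: the objects named in the denials should be present; a one-off
	  # denial like the ones above is usually a transient startup race.
	  kubectl --context addons-606180 -n ingress-nginx get configmap/kube-root-ca.crt secret/ingress-nginx-admission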
	I1127 23:33:02.014903 1461234 out.go:309] Setting ErrFile to fd 2...
	I1127 23:33:02.014909 1461234 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1127 23:33:02.029180 1461234 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:33:02.528950 1461234 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:33:03.030256 1461234 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:33:03.528943 1461234 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:33:04.030046 1461234 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:33:04.529469 1461234 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:33:05.029822 1461234 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:33:05.529275 1461234 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:33:06.029823 1461234 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:33:06.529689 1461234 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:33:07.028798 1461234 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:33:07.529817 1461234 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:33:08.030023 1461234 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:33:08.529597 1461234 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:33:09.029675 1461234 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:33:09.529595 1461234 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:33:10.033618 1461234 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:33:10.529209 1461234 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:33:11.029529 1461234 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:33:11.529414 1461234 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:33:12.028053 1461234 system_pods.go:59] 18 kube-system pods found
	I1127 23:33:12.028100 1461234 system_pods.go:61] "coredns-5dd5756b68-l4fd2" [385f55cc-f543-4ad9-812c-10530c0d79b0] Running
	I1127 23:33:12.028108 1461234 system_pods.go:61] "csi-hostpath-attacher-0" [76d2c097-f0f2-4bfc-a05f-ff3745477118] Running
	I1127 23:33:12.028113 1461234 system_pods.go:61] "csi-hostpath-resizer-0" [40db1436-2767-450d-89c1-1cb3a22bdfa6] Running
	I1127 23:33:12.028118 1461234 system_pods.go:61] "csi-hostpathplugin-xl8vb" [a7da2290-5804-475d-b717-6e4442f455fa] Running
	I1127 23:33:12.028125 1461234 system_pods.go:61] "etcd-addons-606180" [665e7324-fe65-401e-ac36-8114b4b3111a] Running
	I1127 23:33:12.028130 1461234 system_pods.go:61] "kindnet-j2rwb" [d51e8be5-449b-43c4-9934-a5b251c23423] Running
	I1127 23:33:12.028136 1461234 system_pods.go:61] "kube-apiserver-addons-606180" [d12c5fc1-6b96-4c2d-8e00-3983bca3bfe0] Running
	I1127 23:33:12.028141 1461234 system_pods.go:61] "kube-controller-manager-addons-606180" [61191a9f-b07f-4b63-adec-57e01f976ffb] Running
	I1127 23:33:12.028156 1461234 system_pods.go:61] "kube-ingress-dns-minikube" [1224d62a-dc95-4202-b8c8-955e70d28dac] Running / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1127 23:33:12.028169 1461234 system_pods.go:61] "kube-proxy-jp57p" [bdd9dff8-93c4-4ce8-815f-5b46ae700e8b] Running
	I1127 23:33:12.028175 1461234 system_pods.go:61] "kube-scheduler-addons-606180" [e37dd4cd-80d8-4a57-b184-09ea3dda8a4e] Running
	I1127 23:33:12.028187 1461234 system_pods.go:61] "metrics-server-7c66d45ddc-958nm" [1fbde937-4bc9-42bd-9d82-4139b59c9660] Running
	I1127 23:33:12.028194 1461234 system_pods.go:61] "nvidia-device-plugin-daemonset-g52cs" [36e3cc61-cb27-466b-b53f-7c52daf2d850] Running
	I1127 23:33:12.028203 1461234 system_pods.go:61] "registry-proxy-hdjm6" [018a3f9c-a04e-43dd-b703-e873cea20fb0] Running
	I1127 23:33:12.028208 1461234 system_pods.go:61] "registry-pwpqm" [374cdeaf-b970-4ff9-bb78-cb2fc4f63693] Running
	I1127 23:33:12.028213 1461234 system_pods.go:61] "snapshot-controller-58dbcc7b99-8gmlv" [c2725314-697a-4663-96d3-4d45e312a4e4] Running
	I1127 23:33:12.028223 1461234 system_pods.go:61] "snapshot-controller-58dbcc7b99-l5gzr" [f30d7fa0-92f6-455b-8796-7b542e5e47a8] Running
	I1127 23:33:12.028228 1461234 system_pods.go:61] "storage-provisioner" [32e0c4c3-cd98-46ac-9d35-4c3462add8a1] Running
	I1127 23:33:12.028235 1461234 system_pods.go:74] duration metric: took 11.241693545s to wait for pod list to return data ...
	I1127 23:33:12.028249 1461234 default_sa.go:34] waiting for default service account to be created ...
	I1127 23:33:12.030512 1461234 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:33:12.031852 1461234 default_sa.go:45] found service account: "default"
	I1127 23:33:12.031879 1461234 default_sa.go:55] duration metric: took 3.622634ms for default service account to be created ...
	I1127 23:33:12.031890 1461234 system_pods.go:116] waiting for k8s-apps to be running ...
	I1127 23:33:12.042440 1461234 system_pods.go:86] 18 kube-system pods found
	I1127 23:33:12.042492 1461234 system_pods.go:89] "coredns-5dd5756b68-l4fd2" [385f55cc-f543-4ad9-812c-10530c0d79b0] Running
	I1127 23:33:12.042502 1461234 system_pods.go:89] "csi-hostpath-attacher-0" [76d2c097-f0f2-4bfc-a05f-ff3745477118] Running
	I1127 23:33:12.042513 1461234 system_pods.go:89] "csi-hostpath-resizer-0" [40db1436-2767-450d-89c1-1cb3a22bdfa6] Running
	I1127 23:33:12.042543 1461234 system_pods.go:89] "csi-hostpathplugin-xl8vb" [a7da2290-5804-475d-b717-6e4442f455fa] Running
	I1127 23:33:12.042558 1461234 system_pods.go:89] "etcd-addons-606180" [665e7324-fe65-401e-ac36-8114b4b3111a] Running
	I1127 23:33:12.042565 1461234 system_pods.go:89] "kindnet-j2rwb" [d51e8be5-449b-43c4-9934-a5b251c23423] Running
	I1127 23:33:12.042571 1461234 system_pods.go:89] "kube-apiserver-addons-606180" [d12c5fc1-6b96-4c2d-8e00-3983bca3bfe0] Running
	I1127 23:33:12.042576 1461234 system_pods.go:89] "kube-controller-manager-addons-606180" [61191a9f-b07f-4b63-adec-57e01f976ffb] Running
	I1127 23:33:12.042593 1461234 system_pods.go:89] "kube-ingress-dns-minikube" [1224d62a-dc95-4202-b8c8-955e70d28dac] Running / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1127 23:33:12.042600 1461234 system_pods.go:89] "kube-proxy-jp57p" [bdd9dff8-93c4-4ce8-815f-5b46ae700e8b] Running
	I1127 23:33:12.042635 1461234 system_pods.go:89] "kube-scheduler-addons-606180" [e37dd4cd-80d8-4a57-b184-09ea3dda8a4e] Running
	I1127 23:33:12.042649 1461234 system_pods.go:89] "metrics-server-7c66d45ddc-958nm" [1fbde937-4bc9-42bd-9d82-4139b59c9660] Running
	I1127 23:33:12.042656 1461234 system_pods.go:89] "nvidia-device-plugin-daemonset-g52cs" [36e3cc61-cb27-466b-b53f-7c52daf2d850] Running
	I1127 23:33:12.042661 1461234 system_pods.go:89] "registry-proxy-hdjm6" [018a3f9c-a04e-43dd-b703-e873cea20fb0] Running
	I1127 23:33:12.042669 1461234 system_pods.go:89] "registry-pwpqm" [374cdeaf-b970-4ff9-bb78-cb2fc4f63693] Running
	I1127 23:33:12.042674 1461234 system_pods.go:89] "snapshot-controller-58dbcc7b99-8gmlv" [c2725314-697a-4663-96d3-4d45e312a4e4] Running
	I1127 23:33:12.042679 1461234 system_pods.go:89] "snapshot-controller-58dbcc7b99-l5gzr" [f30d7fa0-92f6-455b-8796-7b542e5e47a8] Running
	I1127 23:33:12.042686 1461234 system_pods.go:89] "storage-provisioner" [32e0c4c3-cd98-46ac-9d35-4c3462add8a1] Running
	I1127 23:33:12.042694 1461234 system_pods.go:126] duration metric: took 10.797842ms to wait for k8s-apps to be running ...
	I1127 23:33:12.042727 1461234 system_svc.go:44] waiting for kubelet service to be running ....
	I1127 23:33:12.042808 1461234 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1127 23:33:12.058064 1461234 system_svc.go:56] duration metric: took 15.33766ms WaitForService to wait for kubelet.
	I1127 23:33:12.058110 1461234 kubeadm.go:581] duration metric: took 1m37.356975284s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1127 23:33:12.058153 1461234 node_conditions.go:102] verifying NodePressure condition ...
	I1127 23:33:12.061847 1461234 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1127 23:33:12.061916 1461234 node_conditions.go:123] node cpu capacity is 2
	I1127 23:33:12.061929 1461234 node_conditions.go:105] duration metric: took 3.770415ms to run NodePressure ...
	I1127 23:33:12.061969 1461234 start.go:228] waiting for startup goroutines ...
	I1127 23:33:12.529267 1461234 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:33:13.029520 1461234 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:33:13.532866 1461234 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:33:14.031746 1461234 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:33:14.530493 1461234 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:33:15.034028 1461234 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:33:15.529462 1461234 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:33:16.030736 1461234 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:33:16.530956 1461234 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:33:17.030484 1461234 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:33:17.530001 1461234 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:33:18.030046 1461234 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:33:18.530111 1461234 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:33:19.029559 1461234 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:33:19.529704 1461234 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:33:20.031091 1461234 kapi.go:107] duration metric: took 1m39.570316135s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1127 23:33:20.033188 1461234 out.go:177] * Enabled addons: ingress-dns, nvidia-device-plugin, cloud-spanner, default-storageclass, storage-provisioner, inspektor-gadget, metrics-server, storage-provisioner-rancher, volumesnapshots, registry, gcp-auth, csi-hostpath-driver, ingress
	I1127 23:33:20.034842 1461234 addons.go:502] enable addons completed in 1m45.782308027s: enabled=[ingress-dns nvidia-device-plugin cloud-spanner default-storageclass storage-provisioner inspektor-gadget metrics-server storage-provisioner-rancher volumesnapshots registry gcp-auth csi-hostpath-driver ingress]
	I1127 23:33:20.034917 1461234 start.go:233] waiting for cluster config update ...
	I1127 23:33:20.034944 1461234 start.go:242] writing updated cluster config ...
	I1127 23:33:20.035321 1461234 ssh_runner.go:195] Run: rm -f paused
	I1127 23:33:20.360979 1461234 start.go:600] kubectl: 1.28.4, cluster: 1.28.4 (minor skew: 0)
	I1127 23:33:20.363276 1461234 out.go:177] * Done! kubectl is now configured to use "addons-606180" cluster and "default" namespace by default
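	The kapi.go:96 lines above are minikube polling roughly every 500ms until the ingress-nginx controller pod leaves Pending; the poll took 1m39.57s in this run. A hedged equivalent from outside the harness, assuming kubectl access, is the same kind of wait the test performs:
	
	  # Sketch: block until the controller pod reports Ready, or time out.
	  kubectl --context addons-606180 -n ingress-nginx wait \
	    --for=condition=ready pod -l app.kubernetes.io/name=ingress-nginx --timeout=120s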
	
	* 
	* ==> CRI-O <==
	* Nov 27 23:37:18 addons-606180 crio[886]: time="2023-11-27 23:37:18.790210733Z" level=info msg="Stopped pod sandbox: e462317d59bbb85d9a9894419b5a9863770ad0e0e6f2c6f79c345ddcd0dc69e4" id=51d9b26d-8665-4df2-866a-4e7a7bd9cca1 name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 27 23:37:18 addons-606180 crio[886]: time="2023-11-27 23:37:18.851646894Z" level=info msg="Removing container: 9177accc2b014c6fa7d330a32585c6ab35fdf3524dddace6c606449c2233c5c2" id=eef230d6-96a3-4bb1-961e-4c1165bf2740 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 27 23:37:18 addons-606180 crio[886]: time="2023-11-27 23:37:18.895174083Z" level=info msg="Removed container 9177accc2b014c6fa7d330a32585c6ab35fdf3524dddace6c606449c2233c5c2: default/hello-world-app-5d77478584-tkzmg/hello-world-app" id=eef230d6-96a3-4bb1-961e-4c1165bf2740 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 27 23:37:18 addons-606180 crio[886]: time="2023-11-27 23:37:18.896818840Z" level=info msg="Removing container: ae5fc44c781a614b6b8b0c4fe64ac9899105a67437bcd6fb8881d51c779d21f6" id=651216dc-9895-462d-9c3c-2f7eda0bc9f7 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 27 23:37:18 addons-606180 crio[886]: time="2023-11-27 23:37:18.919354263Z" level=info msg="Removed container ae5fc44c781a614b6b8b0c4fe64ac9899105a67437bcd6fb8881d51c779d21f6: ingress-nginx/ingress-nginx-controller-7c6974c4d8-hmzsr/controller" id=651216dc-9895-462d-9c3c-2f7eda0bc9f7 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 27 23:37:22 addons-606180 crio[886]: time="2023-11-27 23:37:22.266448328Z" level=info msg="Removing container: d52d358cd36c9101d980006629b169f8bc9b0f86d87fb64037a6788a2f5a717d" id=dfdcb37a-7993-458e-9b35-8c2ec8996c2c name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 27 23:37:22 addons-606180 crio[886]: time="2023-11-27 23:37:22.286767791Z" level=info msg="Removed container d52d358cd36c9101d980006629b169f8bc9b0f86d87fb64037a6788a2f5a717d: ingress-nginx/ingress-nginx-admission-patch-kdqx5/patch" id=dfdcb37a-7993-458e-9b35-8c2ec8996c2c name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 27 23:37:22 addons-606180 crio[886]: time="2023-11-27 23:37:22.288729730Z" level=info msg="Removing container: 8a6cded543ff95c1ac743ef97b9893d52a90a4e750989a938fec7f3c817e2201" id=03579059-ffa0-4870-b8e7-aa85b76a4073 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 27 23:37:22 addons-606180 crio[886]: time="2023-11-27 23:37:22.326575162Z" level=info msg="Removed container 8a6cded543ff95c1ac743ef97b9893d52a90a4e750989a938fec7f3c817e2201: ingress-nginx/ingress-nginx-admission-create-vlmh8/create" id=03579059-ffa0-4870-b8e7-aa85b76a4073 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 27 23:37:22 addons-606180 crio[886]: time="2023-11-27 23:37:22.327903443Z" level=info msg="Stopping pod sandbox: 477b43ceddbb84e4b6ef1155f1d5cfe31b61f2b111987442f93393fd210d9a73" id=1d2703c6-ecb6-4ea3-93e2-939efb0811b3 name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 27 23:37:22 addons-606180 crio[886]: time="2023-11-27 23:37:22.327948800Z" level=info msg="Stopped pod sandbox (already stopped): 477b43ceddbb84e4b6ef1155f1d5cfe31b61f2b111987442f93393fd210d9a73" id=1d2703c6-ecb6-4ea3-93e2-939efb0811b3 name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 27 23:37:22 addons-606180 crio[886]: time="2023-11-27 23:37:22.328393653Z" level=info msg="Removing pod sandbox: 477b43ceddbb84e4b6ef1155f1d5cfe31b61f2b111987442f93393fd210d9a73" id=b0ea0c58-c31a-42e9-bf31-d06ab1b583c4 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 27 23:37:22 addons-606180 crio[886]: time="2023-11-27 23:37:22.336937722Z" level=info msg="Removed pod sandbox: 477b43ceddbb84e4b6ef1155f1d5cfe31b61f2b111987442f93393fd210d9a73" id=b0ea0c58-c31a-42e9-bf31-d06ab1b583c4 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 27 23:37:22 addons-606180 crio[886]: time="2023-11-27 23:37:22.337444654Z" level=info msg="Stopping pod sandbox: e462317d59bbb85d9a9894419b5a9863770ad0e0e6f2c6f79c345ddcd0dc69e4" id=2e55c7ba-e41a-4903-9175-8e140dad2b83 name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 27 23:37:22 addons-606180 crio[886]: time="2023-11-27 23:37:22.337476555Z" level=info msg="Stopped pod sandbox (already stopped): e462317d59bbb85d9a9894419b5a9863770ad0e0e6f2c6f79c345ddcd0dc69e4" id=2e55c7ba-e41a-4903-9175-8e140dad2b83 name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 27 23:37:22 addons-606180 crio[886]: time="2023-11-27 23:37:22.337750669Z" level=info msg="Removing pod sandbox: e462317d59bbb85d9a9894419b5a9863770ad0e0e6f2c6f79c345ddcd0dc69e4" id=b0163fcf-4da7-48c5-8d3f-60eedeed2c76 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 27 23:37:22 addons-606180 crio[886]: time="2023-11-27 23:37:22.345712575Z" level=info msg="Removed pod sandbox: e462317d59bbb85d9a9894419b5a9863770ad0e0e6f2c6f79c345ddcd0dc69e4" id=b0163fcf-4da7-48c5-8d3f-60eedeed2c76 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 27 23:37:22 addons-606180 crio[886]: time="2023-11-27 23:37:22.346223388Z" level=info msg="Stopping pod sandbox: 46968edbfdb3561a6c436572ad4cfc9a1f1a16a0983ca2528bb86572f95c375d" id=8855afae-2075-40b8-b614-f45f26535b3d name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 27 23:37:22 addons-606180 crio[886]: time="2023-11-27 23:37:22.346263174Z" level=info msg="Stopped pod sandbox (already stopped): 46968edbfdb3561a6c436572ad4cfc9a1f1a16a0983ca2528bb86572f95c375d" id=8855afae-2075-40b8-b614-f45f26535b3d name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 27 23:37:22 addons-606180 crio[886]: time="2023-11-27 23:37:22.346597480Z" level=info msg="Removing pod sandbox: 46968edbfdb3561a6c436572ad4cfc9a1f1a16a0983ca2528bb86572f95c375d" id=ab2e23ff-354e-4ab5-b303-1cc297ab1a96 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 27 23:37:22 addons-606180 crio[886]: time="2023-11-27 23:37:22.354485294Z" level=info msg="Removed pod sandbox: 46968edbfdb3561a6c436572ad4cfc9a1f1a16a0983ca2528bb86572f95c375d" id=ab2e23ff-354e-4ab5-b303-1cc297ab1a96 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 27 23:37:22 addons-606180 crio[886]: time="2023-11-27 23:37:22.355138399Z" level=info msg="Stopping pod sandbox: c1a2496f5e84d251128681f4d225b908e7d5ed64fecad77f775bd65b2b311b31" id=da6a885d-4da4-4854-b016-af7cf6dbcbb0 name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 27 23:37:22 addons-606180 crio[886]: time="2023-11-27 23:37:22.355184987Z" level=info msg="Stopped pod sandbox (already stopped): c1a2496f5e84d251128681f4d225b908e7d5ed64fecad77f775bd65b2b311b31" id=da6a885d-4da4-4854-b016-af7cf6dbcbb0 name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 27 23:37:22 addons-606180 crio[886]: time="2023-11-27 23:37:22.355565258Z" level=info msg="Removing pod sandbox: c1a2496f5e84d251128681f4d225b908e7d5ed64fecad77f775bd65b2b311b31" id=a92e1cac-dbc2-4c21-851c-40a26a1aa7bf name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 27 23:37:22 addons-606180 crio[886]: time="2023-11-27 23:37:22.363967782Z" level=info msg="Removed pod sandbox: c1a2496f5e84d251128681f4d225b908e7d5ed64fecad77f775bd65b2b311b31" id=a92e1cac-dbc2-4c21-851c-40a26a1aa7bf name=/runtime.v1.RuntimeService/RemovePodSandbox
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                          CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	f9ca17ae9247a       dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79                                               6 seconds ago       Exited              hello-world-app           2                   094246ba59326       hello-world-app-5d77478584-tkzmg
	365d16d4b6479       docker.io/library/nginx@sha256:b7537eea6ffa4f00aac311f16654b50736328eb370208c68b6649a97b7a2724b                2 minutes ago       Running             nginx                     0                   f0ed10ff9a53c       nginx
	ed30c08e4e4b9       ghcr.io/headlamp-k8s/headlamp@sha256:7a9587036bd29304f8f1387a7245556a3c479434670b2ca58e3624d44d2a68c9          3 minutes ago       Running             headlamp                  0                   f52aee2a3debd       headlamp-777fd4b855-98cdp
	263e7fbb3c164       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:63b520448091bc94aa4dba00d6b3b3c25e410c4fb73aa46feae5b25f9895abaa   4 minutes ago       Running             gcp-auth                  0                   92361a7191dd9       gcp-auth-d4c87556c-94zg9
	625e9810ce5fd       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                               5 minutes ago       Running             coredns                   0                   71e95a611f525       coredns-5dd5756b68-l4fd2
	c375368fdb7ae       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                               5 minutes ago       Running             storage-provisioner       0                   5b874ff1de065       storage-provisioner
	cf56f8af181f7       3ca3ca488cf13fde14cfc4b3ffde0c53a8c161b030f4a444a797fba6aef38c39                                               5 minutes ago       Running             kube-proxy                0                   0d0e27eb0c4d3       kube-proxy-jp57p
	4df224723a061       04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26                                               5 minutes ago       Running             kindnet-cni               0                   11f488212ae99       kindnet-j2rwb
	9eec496f1ec73       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                               6 minutes ago       Running             etcd                      0                   ddda8ee8f1795       etcd-addons-606180
	9eaa4515394b6       05c284c929889d88306fdb3dd14ee2d0132543740f9e247685243214fc3d2c54                                               6 minutes ago       Running             kube-scheduler            0                   1b067edc6a50f       kube-scheduler-addons-606180
	4acde5ffc6f5e       04b4c447bb9d4840af3bf7e836397379d65df87c86e55dcd27f31a8d11df2419                                               6 minutes ago       Running             kube-apiserver            0                   8705b6ba4215c       kube-apiserver-addons-606180
	1190f5ab8649a       9961cbceaf234d59b7dcf8a197a024f3e3ce4b7fe2b67c2378efd3d209ca994b                                               6 minutes ago       Running             kube-controller-manager   0                   ede30ad4722e1       kube-controller-manager-addons-606180
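	Note that hello-world-app's container (f9ca17ae9247a) is shown Exited at attempt 2, i.e. it has already been restarted twice; when a test that depends on it fails, its last log lines are worth pulling. A hedged sketch, mirroring the crictl invocations the harness itself uses above:
	
	  # Sketch: read the most recent attempt's output from the exited container.
	  sudo /usr/bin/crictl logs --tail 50 f9ca17ae9247a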
	
	* 
	* ==> coredns [625e9810ce5fdc84ea9f1dfb08c75ad3dfb50c11fbe1cbe3199bf62104b2cc8a] <==
	* [INFO] 10.244.0.19:44401 - 65492 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000068454s
	[INFO] 10.244.0.19:44401 - 47914 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000077374s
	[INFO] 10.244.0.19:44401 - 503 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000092668s
	[INFO] 10.244.0.19:44401 - 57381 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000066929s
	[INFO] 10.244.0.19:44401 - 62763 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001305019s
	[INFO] 10.244.0.19:44401 - 37979 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001329495s
	[INFO] 10.244.0.19:44401 - 37576 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000085808s
	[INFO] 10.244.0.19:42986 - 42533 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000132248s
	[INFO] 10.244.0.19:58209 - 20723 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000051118s
	[INFO] 10.244.0.19:42986 - 4334 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000135333s
	[INFO] 10.244.0.19:58209 - 21305 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000256366s
	[INFO] 10.244.0.19:42986 - 29462 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000047761s
	[INFO] 10.244.0.19:58209 - 12419 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000031138s
	[INFO] 10.244.0.19:42986 - 28731 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000054121s
	[INFO] 10.244.0.19:58209 - 8986 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000045054s
	[INFO] 10.244.0.19:42986 - 8077 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000085234s
	[INFO] 10.244.0.19:58209 - 7983 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000038498s
	[INFO] 10.244.0.19:58209 - 53733 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.00005664s
	[INFO] 10.244.0.19:42986 - 33721 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000041312s
	[INFO] 10.244.0.19:42986 - 47896 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001278968s
	[INFO] 10.244.0.19:58209 - 5637 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001499913s
	[INFO] 10.244.0.19:58209 - 30478 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.000943333s
	[INFO] 10.244.0.19:42986 - 55330 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001115902s
	[INFO] 10.244.0.19:58209 - 1834 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000058649s
	[INFO] 10.244.0.19:42986 - 31287 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000110038s
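	The NXDOMAIN/NOERROR pattern above is normal search-path expansion, not a resolution failure: with the cluster default ndots:5, a name with fewer than five dots is tried against each suffix in the querying pod's search list (here ingress-nginx.svc.cluster.local, svc.cluster.local, cluster.local, us-east-2.compute.internal) before the fully qualified hello-world-app.default.svc.cluster.local answers NOERROR. A hedged way to see the search list that drives this, assuming the default-namespace nginx pod is still running:
	
	  # Sketch: the pod's resolv.conf shows the search suffixes and ndots option;
	  # the exact suffixes differ per namespace.
	  kubectl --context addons-606180 exec nginx -- cat /etc/resolv.conf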
	
	* 
	* ==> describe nodes <==
	* Name:               addons-606180
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-606180
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c8086df69b157f30f19d083fe45cc014f102df45
	                    minikube.k8s.io/name=addons-606180
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_11_27T23_31_22_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-606180
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 27 Nov 2023 23:31:18 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-606180
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 27 Nov 2023 23:37:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 27 Nov 2023 23:34:56 +0000   Mon, 27 Nov 2023 23:31:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 27 Nov 2023 23:34:56 +0000   Mon, 27 Nov 2023 23:31:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 27 Nov 2023 23:34:56 +0000   Mon, 27 Nov 2023 23:31:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 27 Nov 2023 23:34:56 +0000   Mon, 27 Nov 2023 23:32:07 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-606180
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022500Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022500Ki
	  pods:               110
	System Info:
	  Machine ID:                 0b7562edf4c64f10834e6e460a9f8b0f
	  System UUID:                ac6fbf0a-4b6b-4d00-b885-f14e4dacf446
	  Boot ID:                    eb10cf4d-5884-4052-85dd-9e7b7999f82d
	  Kernel Version:             5.15.0-1050-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5d77478584-tkzmg         0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m47s
	  gcp-auth                    gcp-auth-d4c87556c-94zg9                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m41s
	  headlamp                    headlamp-777fd4b855-98cdp                0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m15s
	  kube-system                 coredns-5dd5756b68-l4fd2                 100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     5m50s
	  kube-system                 etcd-addons-606180                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         6m2s
	  kube-system                 kindnet-j2rwb                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      5m51s
	  kube-system                 kube-apiserver-addons-606180             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m2s
	  kube-system                 kube-controller-manager-addons-606180    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m2s
	  kube-system                 kube-proxy-jp57p                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m51s
	  kube-system                 kube-scheduler-addons-606180             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m4s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m45s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 5m44s  kube-proxy       
	  Normal  Starting                 6m3s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m3s   kubelet          Node addons-606180 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m3s   kubelet          Node addons-606180 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m3s   kubelet          Node addons-606180 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           5m51s  node-controller  Node addons-606180 event: Registered Node addons-606180 in Controller
	  Normal  NodeReady                5m17s  kubelet          Node addons-606180 status is now: NodeReady
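	The percentages in the resource tables above are computed against the node's allocatable values and truncated to whole numbers, which is why the memory figures look low:
	
	  # Worked check (allocatable: cpu 2 = 2000m, memory 8022500Ki):
	  #   cpu:    850m / 2000m             = 42.5%  -> shown as 42%
	  #   memory: 220Mi = 225280Ki; 225280 / 8022500 ≈ 2.8%  -> shown as 2%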
	
	* 
	* ==> dmesg <==
	* [  +0.001116] FS-Cache: N-key=[8] '81d5c90000000000'
	[  +0.003087] FS-Cache: Duplicate cookie detected
	[  +0.000723] FS-Cache: O-cookie c=0000003c [p=00000039 fl=226 nc=0 na=1]
	[  +0.001030] FS-Cache: O-cookie d=00000000fe658175{9p.inode} n=000000004dfe655c
	[  +0.001167] FS-Cache: O-key=[8] '81d5c90000000000'
	[  +0.000748] FS-Cache: N-cookie c=00000043 [p=00000039 fl=2 nc=0 na=1]
	[  +0.000974] FS-Cache: N-cookie d=00000000fe658175{9p.inode} n=00000000d6c55197
	[  +0.001089] FS-Cache: N-key=[8] '81d5c90000000000'
	[  +3.372327] FS-Cache: Duplicate cookie detected
	[  +0.000762] FS-Cache: O-cookie c=0000003a [p=00000039 fl=226 nc=0 na=1]
	[  +0.000996] FS-Cache: O-cookie d=00000000fe658175{9p.inode} n=000000005ad38b61
	[  +0.001079] FS-Cache: O-key=[8] '80d5c90000000000'
	[  +0.000757] FS-Cache: N-cookie c=00000045 [p=00000039 fl=2 nc=0 na=1]
	[  +0.001008] FS-Cache: N-cookie d=00000000fe658175{9p.inode} n=0000000093b40df3
	[  +0.001066] FS-Cache: N-key=[8] '80d5c90000000000'
	[  +0.388699] FS-Cache: Duplicate cookie detected
	[  +0.000756] FS-Cache: O-cookie c=0000003f [p=00000039 fl=226 nc=0 na=1]
	[  +0.001038] FS-Cache: O-cookie d=00000000fe658175{9p.inode} n=00000000a0c32ce7
	[  +0.001085] FS-Cache: O-key=[8] '86d5c90000000000'
	[  +0.000729] FS-Cache: N-cookie c=00000046 [p=00000039 fl=2 nc=0 na=1]
	[  +0.000954] FS-Cache: N-cookie d=00000000fe658175{9p.inode} n=00000000dca046b6
	[  +0.001139] FS-Cache: N-key=[8] '86d5c90000000000'
	[Nov27 22:55] systemd-journald[139]: Failed to send stream file descriptor to service manager: Connection refused
	[Nov27 23:02] systemd-journald[223]: Failed to send stream file descriptor to service manager: Connection refused
	[Nov27 23:03] systemd-journald[222]: Failed to send stream file descriptor to service manager: Connection refused
	
	* 
	* ==> etcd [9eec496f1ec73e7e42c9ef2866cf724838c0a2f79226dd31ae75423fb329ed9d] <==
	* {"level":"info","ts":"2023-11-27T23:31:15.730027Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-27T23:31:15.734052Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-606180 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2023-11-27T23:31:15.73421Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-11-27T23:31:15.735233Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2023-11-27T23:31:15.735333Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-27T23:31:15.735742Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-27T23:31:15.736248Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-27T23:31:15.735613Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-11-27T23:31:15.738702Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-11-27T23:31:15.746378Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-11-27T23:31:15.747088Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-11-27T23:31:35.715761Z","caller":"traceutil/trace.go:171","msg":"trace[1736014750] transaction","detail":"{read_only:false; response_revision:364; number_of_response:1; }","duration":"113.311634ms","start":"2023-11-27T23:31:35.602435Z","end":"2023-11-27T23:31:35.715746Z","steps":["trace[1736014750] 'process raft request'  (duration: 113.224734ms)"],"step_count":1}
	{"level":"info","ts":"2023-11-27T23:31:36.138039Z","caller":"traceutil/trace.go:171","msg":"trace[1151142694] transaction","detail":"{read_only:false; response_revision:365; number_of_response:1; }","duration":"118.231738ms","start":"2023-11-27T23:31:36.01979Z","end":"2023-11-27T23:31:36.138022Z","steps":["trace[1151142694] 'process raft request'  (duration: 82.126883ms)"],"step_count":1}
	{"level":"warn","ts":"2023-11-27T23:31:36.254209Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"100.832384ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8128025441256248926 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/kube-proxy-jp57p.179b9ecedfb278d3\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/kube-proxy-jp57p.179b9ecedfb278d3\" value_size:634 lease:8128025441256248621 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2023-11-27T23:31:36.2682Z","caller":"traceutil/trace.go:171","msg":"trace[1671104341] transaction","detail":"{read_only:false; response_revision:366; number_of_response:1; }","duration":"149.618055ms","start":"2023-11-27T23:31:36.118563Z","end":"2023-11-27T23:31:36.268181Z","steps":["trace[1671104341] 'process raft request'  (duration: 19.437926ms)","trace[1671104341] 'compare'  (duration: 96.803102ms)"],"step_count":2}
	{"level":"info","ts":"2023-11-27T23:31:36.716765Z","caller":"traceutil/trace.go:171","msg":"trace[1173001461] transaction","detail":"{read_only:false; response_revision:369; number_of_response:1; }","duration":"138.993303ms","start":"2023-11-27T23:31:36.57775Z","end":"2023-11-27T23:31:36.716744Z","steps":["trace[1173001461] 'process raft request'  (duration: 51.061121ms)","trace[1173001461] 'compare'  (duration: 81.209238ms)"],"step_count":2}
	{"level":"info","ts":"2023-11-27T23:31:37.392794Z","caller":"traceutil/trace.go:171","msg":"trace[1370585266] linearizableReadLoop","detail":"{readStateIndex:384; appliedIndex:383; }","duration":"114.434143ms","start":"2023-11-27T23:31:37.278344Z","end":"2023-11-27T23:31:37.392778Z","steps":["trace[1370585266] 'read index received'  (duration: 790.262µs)","trace[1370585266] 'applied index is now lower than readState.Index'  (duration: 113.642954ms)"],"step_count":2}
	{"level":"warn","ts":"2023-11-27T23:31:37.393136Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"114.772298ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/addons-606180\" ","response":"range_response_count:1 size:5743"}
	{"level":"info","ts":"2023-11-27T23:31:37.393171Z","caller":"traceutil/trace.go:171","msg":"trace[232118623] range","detail":"{range_begin:/registry/minions/addons-606180; range_end:; response_count:1; response_revision:373; }","duration":"114.840088ms","start":"2023-11-27T23:31:37.278321Z","end":"2023-11-27T23:31:37.393162Z","steps":["trace[232118623] 'agreement among raft nodes before linearized reading'  (duration: 114.674665ms)"],"step_count":1}
	{"level":"info","ts":"2023-11-27T23:31:37.393559Z","caller":"traceutil/trace.go:171","msg":"trace[1790860593] transaction","detail":"{read_only:false; response_revision:373; number_of_response:1; }","duration":"115.296142ms","start":"2023-11-27T23:31:37.278233Z","end":"2023-11-27T23:31:37.393529Z","steps":["trace[1790860593] 'process raft request'  (duration: 114.396556ms)"],"step_count":1}
	{"level":"warn","ts":"2023-11-27T23:31:37.725652Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"111.869341ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8128025441256248938 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/serviceaccounts/kube-system/coredns\" mod_revision:231 > success:<request_put:<key:\"/registry/serviceaccounts/kube-system/coredns\" value_size:124 >> failure:<request_range:<key:\"/registry/serviceaccounts/kube-system/coredns\" > >>","response":"size:16"}
	{"level":"warn","ts":"2023-11-27T23:31:37.726432Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"110.967982ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/storageclasses/standard\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-11-27T23:31:37.726477Z","caller":"traceutil/trace.go:171","msg":"trace[1079227380] range","detail":"{range_begin:/registry/storageclasses/standard; range_end:; response_count:0; response_revision:375; }","duration":"111.01544ms","start":"2023-11-27T23:31:37.615449Z","end":"2023-11-27T23:31:37.726465Z","steps":["trace[1079227380] 'agreement among raft nodes before linearized reading'  (duration: 110.952614ms)"],"step_count":1}
	{"level":"info","ts":"2023-11-27T23:31:37.727244Z","caller":"traceutil/trace.go:171","msg":"trace[1902940342] transaction","detail":"{read_only:false; response_revision:374; number_of_response:1; }","duration":"372.668758ms","start":"2023-11-27T23:31:37.354565Z","end":"2023-11-27T23:31:37.727234Z","steps":["trace[1902940342] 'process raft request'  (duration: 259.012126ms)","trace[1902940342] 'compare'  (duration: 88.560377ms)"],"step_count":2}
	{"level":"warn","ts":"2023-11-27T23:31:37.727304Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-11-27T23:31:37.354548Z","time spent":"372.717636ms","remote":"127.0.0.1:56452","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":176,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/serviceaccounts/kube-system/coredns\" mod_revision:231 > success:<request_put:<key:\"/registry/serviceaccounts/kube-system/coredns\" value_size:124 >> failure:<request_range:<key:\"/registry/serviceaccounts/kube-system/coredns\" > >"}
	
	* 
	* ==> gcp-auth [263e7fbb3c1648ce109c5be2f5d1036f392dbbb7dd2fc0a8a9c068b862fa774b] <==
	* 2023/11/27 23:32:51 GCP Auth Webhook started!
	2023/11/27 23:33:28 Ready to marshal response ...
	2023/11/27 23:33:28 Ready to write response ...
	2023/11/27 23:33:30 Ready to marshal response ...
	2023/11/27 23:33:30 Ready to write response ...
	2023/11/27 23:33:44 Ready to marshal response ...
	2023/11/27 23:33:44 Ready to write response ...
	2023/11/27 23:33:44 Ready to marshal response ...
	2023/11/27 23:33:44 Ready to write response ...
	2023/11/27 23:33:45 Ready to marshal response ...
	2023/11/27 23:33:45 Ready to write response ...
	2023/11/27 23:33:53 Ready to marshal response ...
	2023/11/27 23:33:53 Ready to write response ...
	2023/11/27 23:34:09 Ready to marshal response ...
	2023/11/27 23:34:09 Ready to write response ...
	2023/11/27 23:34:09 Ready to marshal response ...
	2023/11/27 23:34:09 Ready to write response ...
	2023/11/27 23:34:09 Ready to marshal response ...
	2023/11/27 23:34:09 Ready to write response ...
	2023/11/27 23:34:37 Ready to marshal response ...
	2023/11/27 23:34:37 Ready to write response ...
	2023/11/27 23:36:57 Ready to marshal response ...
	2023/11/27 23:36:57 Ready to write response ...
	
	* 
	* ==> kernel <==
	*  23:37:24 up  6:19,  0 users,  load average: 0.18, 1.45, 2.77
	Linux addons-606180 5.15.0-1050-aws #55~20.04.1-Ubuntu SMP Mon Nov 6 12:18:16 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	* 
	* ==> kindnet [4df224723a061c4322e74c4a6fe0c42b5f6bde92eff76afe7a08b5a4fdc29c87] <==
	* I1127 23:35:16.893781       1 main.go:227] handling current node
	I1127 23:35:26.906175       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1127 23:35:26.906200       1 main.go:227] handling current node
	I1127 23:35:36.910114       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1127 23:35:36.910141       1 main.go:227] handling current node
	I1127 23:35:46.924860       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1127 23:35:46.924890       1 main.go:227] handling current node
	I1127 23:35:56.937615       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1127 23:35:56.937641       1 main.go:227] handling current node
	I1127 23:36:06.941888       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1127 23:36:06.942066       1 main.go:227] handling current node
	I1127 23:36:16.950861       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1127 23:36:16.950889       1 main.go:227] handling current node
	I1127 23:36:26.963416       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1127 23:36:26.963445       1 main.go:227] handling current node
	I1127 23:36:36.967899       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1127 23:36:36.967924       1 main.go:227] handling current node
	I1127 23:36:46.979998       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1127 23:36:46.980027       1 main.go:227] handling current node
	I1127 23:36:56.992198       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1127 23:36:56.992230       1 main.go:227] handling current node
	I1127 23:37:07.002756       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1127 23:37:07.002789       1 main.go:227] handling current node
	I1127 23:37:17.007952       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1127 23:37:17.007982       1 main.go:227] handling current node
	
	* 
	* ==> kube-apiserver [4acde5ffc6f5e58cbd575d400616f38f441a2f39dcb5612e040722a56fef7614] <==
	* I1127 23:34:02.329596       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1127 23:34:02.329708       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1127 23:34:02.357603       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1127 23:34:02.357654       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1127 23:34:02.357913       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1127 23:34:02.357956       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1127 23:34:02.377213       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1127 23:34:02.377269       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1127 23:34:02.385443       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1127 23:34:02.385490       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1127 23:34:02.388245       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1127 23:34:02.388293       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1127 23:34:03.358296       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1127 23:34:03.387072       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1127 23:34:03.416606       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	E1127 23:34:09.712352       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I1127 23:34:09.718722       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.105.98.21"}
	I1127 23:34:18.113541       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I1127 23:34:28.154817       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I1127 23:34:31.276273       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	I1127 23:34:31.285667       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W1127 23:34:32.303457       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I1127 23:34:37.357394       1 controller.go:624] quota admission added evaluator for: ingresses.networking.k8s.io
	I1127 23:34:37.774488       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.109.235.123"}
	I1127 23:36:58.177198       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.96.110.153"}
	
	* 
	* ==> kube-controller-manager [1190f5ab8649a19420911be3fdff74aa915e240d2918bb980cffb6f2bd6120a0] <==
	* E1127 23:36:02.047913       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1127 23:36:33.296674       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1127 23:36:33.296707       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1127 23:36:37.898037       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1127 23:36:37.898069       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1127 23:36:43.096908       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1127 23:36:43.096938       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1127 23:36:55.204301       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1127 23:36:55.204332       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I1127 23:36:57.903655       1 event.go:307] "Event occurred" object="default/hello-world-app" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-world-app-5d77478584 to 1"
	I1127 23:36:57.939363       1 event.go:307] "Event occurred" object="default/hello-world-app-5d77478584" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-world-app-5d77478584-tkzmg"
	I1127 23:36:57.958146       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="54.238747ms"
	I1127 23:36:57.972519       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="14.224194ms"
	I1127 23:36:57.972756       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="31.245µs"
	I1127 23:37:00.829751       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="54.178µs"
	I1127 23:37:01.828175       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="72.303µs"
	I1127 23:37:02.828206       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="75.018µs"
	W1127 23:37:05.019723       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1127 23:37:05.019765       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I1127 23:37:15.545552       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-create"
	I1127 23:37:15.553488       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-patch"
	I1127 23:37:15.553771       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-7c6974c4d8" duration="6.843µs"
	I1127 23:37:18.874341       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="81.254µs"
	W1127 23:37:21.109933       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1127 23:37:21.109968       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	
	* 
	* ==> kube-proxy [cf56f8af181f7667845cf630a0931a8b5ec982efd7680d6286c08f39bed711a0] <==
	* I1127 23:31:39.440052       1 server_others.go:69] "Using iptables proxy"
	I1127 23:31:39.614168       1 node.go:141] Successfully retrieved node IP: 192.168.49.2
	I1127 23:31:39.950738       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1127 23:31:39.953399       1 server_others.go:152] "Using iptables Proxier"
	I1127 23:31:39.953491       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1127 23:31:39.953524       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1127 23:31:39.953625       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1127 23:31:39.954034       1 server.go:846] "Version info" version="v1.28.4"
	I1127 23:31:39.954273       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1127 23:31:39.955144       1 config.go:188] "Starting service config controller"
	I1127 23:31:39.955222       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1127 23:31:39.955279       1 config.go:97] "Starting endpoint slice config controller"
	I1127 23:31:39.955306       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1127 23:31:39.955826       1 config.go:315] "Starting node config controller"
	I1127 23:31:39.955874       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1127 23:31:40.064335       1 shared_informer.go:318] Caches are synced for service config
	I1127 23:31:40.064472       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1127 23:31:40.060159       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [9eaa4515394b6211693689b3b71e75375a645c2dc8cf58ae0c34dffe92e8a53e] <==
	* I1127 23:31:18.679959       1 serving.go:348] Generated self-signed cert in-memory
	I1127 23:31:19.922475       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I1127 23:31:19.922585       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1127 23:31:19.927334       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I1127 23:31:19.927371       1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
	I1127 23:31:19.927415       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1127 23:31:19.927424       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1127 23:31:19.927440       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1127 23:31:19.927445       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1127 23:31:19.928235       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1127 23:31:19.928336       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1127 23:31:20.028531       1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
	I1127 23:31:20.028704       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1127 23:31:20.028756       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* Nov 27 23:37:22 addons-606180 kubelet[1349]: E1127 23:37:22.035564    1349 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/983d58f27a60f49aff0cc6461af8df37de1641464c8bae12503f3dc58627c946/diff" to get inode usage: stat /var/lib/containers/storage/overlay/983d58f27a60f49aff0cc6461af8df37de1641464c8bae12503f3dc58627c946/diff: no such file or directory, extraDiskErr: <nil>
	Nov 27 23:37:22 addons-606180 kubelet[1349]: E1127 23:37:22.048494    1349 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/cd6b3334d50c548868c2e3e90dca94a273f6cd9db0f24af88d4428a7bdb582b0/diff" to get inode usage: stat /var/lib/containers/storage/overlay/cd6b3334d50c548868c2e3e90dca94a273f6cd9db0f24af88d4428a7bdb582b0/diff: no such file or directory, extraDiskErr: <nil>
	Nov 27 23:37:22 addons-606180 kubelet[1349]: E1127 23:37:22.051822    1349 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/46ec65e0776e7bc363cafe0009bf60a0e1287321c033e7a8c8085e9ff11360db/diff" to get inode usage: stat /var/lib/containers/storage/overlay/46ec65e0776e7bc363cafe0009bf60a0e1287321c033e7a8c8085e9ff11360db/diff: no such file or directory, extraDiskErr: <nil>
	Nov 27 23:37:22 addons-606180 kubelet[1349]: E1127 23:37:22.051828    1349 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/0b5c5ef6c8a2a4d44e31438ff0c9fc804b311221af523799677b66c306fe4f6c/diff" to get inode usage: stat /var/lib/containers/storage/overlay/0b5c5ef6c8a2a4d44e31438ff0c9fc804b311221af523799677b66c306fe4f6c/diff: no such file or directory, extraDiskErr: <nil>
	Nov 27 23:37:22 addons-606180 kubelet[1349]: E1127 23:37:22.052113    1349 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/b7c7417de4512918a342f9320414203d0c8d3d1dee1b702f7c242ddeaddfeda6/diff" to get inode usage: stat /var/lib/containers/storage/overlay/b7c7417de4512918a342f9320414203d0c8d3d1dee1b702f7c242ddeaddfeda6/diff: no such file or directory, extraDiskErr: <nil>
	Nov 27 23:37:22 addons-606180 kubelet[1349]: E1127 23:37:22.053345    1349 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/66fc76da52c9937408ceac280178a9036636d7fc6b794c5f4f1ad2d5b9a0915a/diff" to get inode usage: stat /var/lib/containers/storage/overlay/66fc76da52c9937408ceac280178a9036636d7fc6b794c5f4f1ad2d5b9a0915a/diff: no such file or directory, extraDiskErr: <nil>
	Nov 27 23:37:22 addons-606180 kubelet[1349]: E1127 23:37:22.054440    1349 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/062a440cf8288f21863f0d145c9dfe9250e1ae7bc835c0d8b8c5b3eb1ccc106f/diff" to get inode usage: stat /var/lib/containers/storage/overlay/062a440cf8288f21863f0d145c9dfe9250e1ae7bc835c0d8b8c5b3eb1ccc106f/diff: no such file or directory, extraDiskErr: <nil>
	Nov 27 23:37:22 addons-606180 kubelet[1349]: E1127 23:37:22.055601    1349 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/173518763d566f851d027db2191ec80fbfb12e6aa1b7b8e9b5bf31d93812ba50/diff" to get inode usage: stat /var/lib/containers/storage/overlay/173518763d566f851d027db2191ec80fbfb12e6aa1b7b8e9b5bf31d93812ba50/diff: no such file or directory, extraDiskErr: <nil>
	Nov 27 23:37:22 addons-606180 kubelet[1349]: E1127 23:37:22.055991    1349 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/08a07871762110f79042b28676d173b072d2400e421c4d4401a1676534b8f53e/diff" to get inode usage: stat /var/lib/containers/storage/overlay/08a07871762110f79042b28676d173b072d2400e421c4d4401a1676534b8f53e/diff: no such file or directory, extraDiskErr: <nil>
	Nov 27 23:37:22 addons-606180 kubelet[1349]: E1127 23:37:22.063414    1349 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/6f7d1965dcfb459301b99777ad5d34128bbbfa67bd5624dbcdd2b3832082b86b/diff" to get inode usage: stat /var/lib/containers/storage/overlay/6f7d1965dcfb459301b99777ad5d34128bbbfa67bd5624dbcdd2b3832082b86b/diff: no such file or directory, extraDiskErr: <nil>
	Nov 27 23:37:22 addons-606180 kubelet[1349]: E1127 23:37:22.064592    1349 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/6f7d1965dcfb459301b99777ad5d34128bbbfa67bd5624dbcdd2b3832082b86b/diff" to get inode usage: stat /var/lib/containers/storage/overlay/6f7d1965dcfb459301b99777ad5d34128bbbfa67bd5624dbcdd2b3832082b86b/diff: no such file or directory, extraDiskErr: <nil>
	Nov 27 23:37:22 addons-606180 kubelet[1349]: E1127 23:37:22.067014    1349 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/96a3a4b8485e727a00442e4ab3e3b6d4a821e48a3be8f00473d0b849d523d285/diff" to get inode usage: stat /var/lib/containers/storage/overlay/96a3a4b8485e727a00442e4ab3e3b6d4a821e48a3be8f00473d0b849d523d285/diff: no such file or directory, extraDiskErr: <nil>
	Nov 27 23:37:22 addons-606180 kubelet[1349]: E1127 23:37:22.067025    1349 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/b7c7417de4512918a342f9320414203d0c8d3d1dee1b702f7c242ddeaddfeda6/diff" to get inode usage: stat /var/lib/containers/storage/overlay/b7c7417de4512918a342f9320414203d0c8d3d1dee1b702f7c242ddeaddfeda6/diff: no such file or directory, extraDiskErr: <nil>
	Nov 27 23:37:22 addons-606180 kubelet[1349]: E1127 23:37:22.068148    1349 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/8557cd473e4fbd6d5f45c3525aa9d9573def1929a65dac33bd2584c692b70647/diff" to get inode usage: stat /var/lib/containers/storage/overlay/8557cd473e4fbd6d5f45c3525aa9d9573def1929a65dac33bd2584c692b70647/diff: no such file or directory, extraDiskErr: <nil>
	Nov 27 23:37:22 addons-606180 kubelet[1349]: E1127 23:37:22.072428    1349 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/c53604170879b279f433432098bd1df165d0c0272bb971c6943c1028e18c317a/diff" to get inode usage: stat /var/lib/containers/storage/overlay/c53604170879b279f433432098bd1df165d0c0272bb971c6943c1028e18c317a/diff: no such file or directory, extraDiskErr: <nil>
	Nov 27 23:37:22 addons-606180 kubelet[1349]: E1127 23:37:22.073540    1349 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/c53604170879b279f433432098bd1df165d0c0272bb971c6943c1028e18c317a/diff" to get inode usage: stat /var/lib/containers/storage/overlay/c53604170879b279f433432098bd1df165d0c0272bb971c6943c1028e18c317a/diff: no such file or directory, extraDiskErr: <nil>
	Nov 27 23:37:22 addons-606180 kubelet[1349]: E1127 23:37:22.075757    1349 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/0b5c5ef6c8a2a4d44e31438ff0c9fc804b311221af523799677b66c306fe4f6c/diff" to get inode usage: stat /var/lib/containers/storage/overlay/0b5c5ef6c8a2a4d44e31438ff0c9fc804b311221af523799677b66c306fe4f6c/diff: no such file or directory, extraDiskErr: <nil>
	Nov 27 23:37:22 addons-606180 kubelet[1349]: E1127 23:37:22.076891    1349 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/8557cd473e4fbd6d5f45c3525aa9d9573def1929a65dac33bd2584c692b70647/diff" to get inode usage: stat /var/lib/containers/storage/overlay/8557cd473e4fbd6d5f45c3525aa9d9573def1929a65dac33bd2584c692b70647/diff: no such file or directory, extraDiskErr: <nil>
	Nov 27 23:37:22 addons-606180 kubelet[1349]: E1127 23:37:22.076910    1349 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/78b279568d93364e6d2c4f6ce90e6355176b7c84713920564a9d223438541d3e/diff" to get inode usage: stat /var/lib/containers/storage/overlay/78b279568d93364e6d2c4f6ce90e6355176b7c84713920564a9d223438541d3e/diff: no such file or directory, extraDiskErr: <nil>
	Nov 27 23:37:22 addons-606180 kubelet[1349]: E1127 23:37:22.076979    1349 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/6d2e507e66ec03484e2ba6c8bdca85ae8e5944ac085bcaa4f34867057c323fea/diff" to get inode usage: stat /var/lib/containers/storage/overlay/6d2e507e66ec03484e2ba6c8bdca85ae8e5944ac085bcaa4f34867057c323fea/diff: no such file or directory, extraDiskErr: <nil>
	Nov 27 23:37:22 addons-606180 kubelet[1349]: E1127 23:37:22.078421    1349 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/062a440cf8288f21863f0d145c9dfe9250e1ae7bc835c0d8b8c5b3eb1ccc106f/diff" to get inode usage: stat /var/lib/containers/storage/overlay/062a440cf8288f21863f0d145c9dfe9250e1ae7bc835c0d8b8c5b3eb1ccc106f/diff: no such file or directory, extraDiskErr: <nil>
	Nov 27 23:37:22 addons-606180 kubelet[1349]: E1127 23:37:22.080264    1349 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/96a3a4b8485e727a00442e4ab3e3b6d4a821e48a3be8f00473d0b849d523d285/diff" to get inode usage: stat /var/lib/containers/storage/overlay/96a3a4b8485e727a00442e4ab3e3b6d4a821e48a3be8f00473d0b849d523d285/diff: no such file or directory, extraDiskErr: <nil>
	Nov 27 23:37:22 addons-606180 kubelet[1349]: E1127 23:37:22.088951    1349 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/173518763d566f851d027db2191ec80fbfb12e6aa1b7b8e9b5bf31d93812ba50/diff" to get inode usage: stat /var/lib/containers/storage/overlay/173518763d566f851d027db2191ec80fbfb12e6aa1b7b8e9b5bf31d93812ba50/diff: no such file or directory, extraDiskErr: <nil>
	Nov 27 23:37:22 addons-606180 kubelet[1349]: I1127 23:37:22.265273    1349 scope.go:117] "RemoveContainer" containerID="d52d358cd36c9101d980006629b169f8bc9b0f86d87fb64037a6788a2f5a717d"
	Nov 27 23:37:22 addons-606180 kubelet[1349]: I1127 23:37:22.287112    1349 scope.go:117] "RemoveContainer" containerID="8a6cded543ff95c1ac743ef97b9893d52a90a4e750989a938fec7f3c817e2201"
	
	* 
	* ==> storage-provisioner [c375368fdb7aea16064ef078c95b3cf4ca9ebccdcc89ea09139d2270b3c7d542] <==
	* I1127 23:32:07.839661       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1127 23:32:08.033445       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1127 23:32:08.033547       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1127 23:32:08.199906       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1127 23:32:08.220057       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-606180_6d890f52-e172-4aa8-9c21-eda4349eab97!
	I1127 23:32:08.222039       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"6a945d28-55d8-4cd1-b410-1bbcf948bd9f", APIVersion:"v1", ResourceVersion:"821", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-606180_6d890f52-e172-4aa8-9c21-eda4349eab97 became leader
	I1127 23:32:08.321288       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-606180_6d890f52-e172-4aa8-9c21-eda4349eab97!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-606180 -n addons-606180
helpers_test.go:261: (dbg) Run:  kubectl --context addons-606180 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (168.81s)

TestIngressAddonLegacy/serial/ValidateIngressAddons (181.13s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:206: (dbg) Run:  kubectl --context ingress-addon-legacy-684553 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:206: (dbg) Done: kubectl --context ingress-addon-legacy-684553 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (13.016442102s)
addons_test.go:231: (dbg) Run:  kubectl --context ingress-addon-legacy-684553 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:244: (dbg) Run:  kubectl --context ingress-addon-legacy-684553 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:249: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [5d482862-503a-489f-b97f-1644c02fd1dd] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [5d482862-503a-489f-b97f-1644c02fd1dd] Running
addons_test.go:249: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 9.013144259s
addons_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-684553 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
E1127 23:46:33.163552 1460652 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/functional-428453/client.crt: no such file or directory
E1127 23:46:33.169446 1460652 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/functional-428453/client.crt: no such file or directory
E1127 23:46:33.179750 1460652 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/functional-428453/client.crt: no such file or directory
E1127 23:46:33.199988 1460652 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/functional-428453/client.crt: no such file or directory
E1127 23:46:33.240320 1460652 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/functional-428453/client.crt: no such file or directory
E1127 23:46:33.320612 1460652 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/functional-428453/client.crt: no such file or directory
E1127 23:46:33.481119 1460652 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/functional-428453/client.crt: no such file or directory
E1127 23:46:33.801716 1460652 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/functional-428453/client.crt: no such file or directory
E1127 23:46:34.442634 1460652 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/functional-428453/client.crt: no such file or directory
E1127 23:46:35.723099 1460652 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/functional-428453/client.crt: no such file or directory
E1127 23:46:38.283356 1460652 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/functional-428453/client.crt: no such file or directory
E1127 23:46:43.403841 1460652 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/functional-428453/client.crt: no such file or directory
E1127 23:46:53.644814 1460652 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/functional-428453/client.crt: no such file or directory
addons_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ingress-addon-legacy-684553 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m9.940859479s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:277: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
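For context, the failing ssh probe above is just an HTTP GET against the node with an overridden Host header, so the request matches the nginx.example.com Ingress rule even though it dials 127.0.0.1. A minimal Go sketch of the same check, assuming the address and hostname quoted in the log (illustrative only, not minikube's actual test code):

package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	// Dial 127.0.0.1 directly, but set req.Host so the ingress controller
	// routes the request using its rule for nginx.example.com.
	client := &http.Client{Timeout: 10 * time.Second}
	req, err := http.NewRequest("GET", "http://127.0.0.1/", nil)
	if err != nil {
		panic(err)
	}
	req.Host = "nginx.example.com"

	resp, err := client.Do(req)
	if err != nil {
		// In this run the equivalent curl timed out (ssh exit status 28).
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}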
addons_test.go:285: (dbg) Run:  kubectl --context ingress-addon-legacy-684553 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-684553 ip
addons_test.go:296: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:296: (dbg) Non-zero exit: nslookup hello-john.test 192.168.49.2: exit status 1 (15.026540163s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
addons_test.go:298: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.49.2" : exit status 1
addons_test.go:302: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached

stderr: 
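The nslookup step queries the minikube node itself as a DNS server, which is the endpoint the ingress-dns addon exposes. A hedged Go sketch of the same lookup, assuming the node IP 192.168.49.2 reported by `minikube ip` above (illustrative only, not the test's implementation):

package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

func main() {
	// Resolve against the node's ingress-dns endpoint instead of the
	// system resolver, mirroring `nslookup hello-john.test 192.168.49.2`.
	r := &net.Resolver{
		PreferGo: true,
		Dial: func(ctx context.Context, network, address string) (net.Conn, error) {
			d := net.Dialer{Timeout: 5 * time.Second}
			return d.DialContext(ctx, "udp", "192.168.49.2:53")
		},
	}

	ctx, cancel := context.WithTimeout(context.Background(), 15*time.Second)
	defer cancel()
	addrs, err := r.LookupHost(ctx, "hello-john.test")
	if err != nil {
		// This run timed out: ";; connection timed out; no servers could be reached".
		fmt.Println("lookup failed:", err)
		return
	}
	fmt.Println("resolved:", addrs)
}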
addons_test.go:305: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-684553 addons disable ingress-dns --alsologtostderr -v=1
E1127 23:47:14.125042 1460652 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/functional-428453/client.crt: no such file or directory
addons_test.go:305: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-684553 addons disable ingress-dns --alsologtostderr -v=1: (3.004079823s)
addons_test.go:310: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-684553 addons disable ingress --alsologtostderr -v=1
addons_test.go:310: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-684553 addons disable ingress --alsologtostderr -v=1: (7.62830961s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-684553
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-684553:

-- stdout --
	[
	    {
	        "Id": "4e0bd9abf051ce542aa45fac0897b712226062f470cd4d7048f129dbb182029b",
	        "Created": "2023-11-27T23:43:03.540820233Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1489514,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-11-27T23:43:03.912662404Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:6b2707af759835616662ab4511fa7cfc968ed5500b8f30a5d231d9af64582310",
	        "ResolvConfPath": "/var/lib/docker/containers/4e0bd9abf051ce542aa45fac0897b712226062f470cd4d7048f129dbb182029b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/4e0bd9abf051ce542aa45fac0897b712226062f470cd4d7048f129dbb182029b/hostname",
	        "HostsPath": "/var/lib/docker/containers/4e0bd9abf051ce542aa45fac0897b712226062f470cd4d7048f129dbb182029b/hosts",
	        "LogPath": "/var/lib/docker/containers/4e0bd9abf051ce542aa45fac0897b712226062f470cd4d7048f129dbb182029b/4e0bd9abf051ce542aa45fac0897b712226062f470cd4d7048f129dbb182029b-json.log",
	        "Name": "/ingress-addon-legacy-684553",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-684553:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-684553",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/b5899067d0e56d905df23637604668f6f5a552bc8dce7e2729c82814c4bddff0-init/diff:/var/lib/docker/overlay2/66e18f6b92e8847ad9065a2bde54888b27c493e8cb472385d095e2aee2f57672/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b5899067d0e56d905df23637604668f6f5a552bc8dce7e2729c82814c4bddff0/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b5899067d0e56d905df23637604668f6f5a552bc8dce7e2729c82814c4bddff0/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b5899067d0e56d905df23637604668f6f5a552bc8dce7e2729c82814c4bddff0/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-684553",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-684553/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-684553",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-684553",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-684553",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b9ed534d78aee934dac4fc047d4699a0b8e8a26c133569103a6fe6411e375b00",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34084"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34083"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34080"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34082"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34081"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/b9ed534d78ae",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-684553": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "4e0bd9abf051",
	                        "ingress-addon-legacy-684553"
	                    ],
	                    "NetworkID": "0eac83cfa917b1750820512904c004ebd4e1bde58091658402d7e0a5b17c020d",
	                    "EndpointID": "9acc3e404446ee595709fc2aa02a2e37dd2ddf9e89f793201519e55a40c2ad27",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ingress-addon-legacy-684553 -n ingress-addon-legacy-684553
helpers_test.go:244: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-684553 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-684553 logs -n 25: (1.394415435s)
helpers_test.go:252: TestIngressAddonLegacy/serial/ValidateIngressAddons logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |----------------|------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	|    Command     |                                  Args                                  |           Profile           |  User   | Version |     Start Time      |      End Time       |
	|----------------|------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| update-context | functional-428453                                                      | functional-428453           | jenkins | v1.32.0 | 27 Nov 23 23:42 UTC | 27 Nov 23 23:42 UTC |
	|                | update-context                                                         |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                 |                             |         |         |                     |                     |
	| image          | functional-428453 image ls                                             | functional-428453           | jenkins | v1.32.0 | 27 Nov 23 23:42 UTC | 27 Nov 23 23:42 UTC |
	| image          | functional-428453 image load --daemon                                  | functional-428453           | jenkins | v1.32.0 | 27 Nov 23 23:42 UTC | 27 Nov 23 23:42 UTC |
	|                | gcr.io/google-containers/addon-resizer:functional-428453               |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image          | functional-428453 image ls                                             | functional-428453           | jenkins | v1.32.0 | 27 Nov 23 23:42 UTC | 27 Nov 23 23:42 UTC |
	| image          | functional-428453 image save                                           | functional-428453           | jenkins | v1.32.0 | 27 Nov 23 23:42 UTC | 27 Nov 23 23:42 UTC |
	|                | gcr.io/google-containers/addon-resizer:functional-428453               |                             |         |         |                     |                     |
	|                | /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image          | functional-428453 image rm                                             | functional-428453           | jenkins | v1.32.0 | 27 Nov 23 23:42 UTC | 27 Nov 23 23:42 UTC |
	|                | gcr.io/google-containers/addon-resizer:functional-428453               |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image          | functional-428453 image ls                                             | functional-428453           | jenkins | v1.32.0 | 27 Nov 23 23:42 UTC | 27 Nov 23 23:42 UTC |
	| image          | functional-428453 image load                                           | functional-428453           | jenkins | v1.32.0 | 27 Nov 23 23:42 UTC | 27 Nov 23 23:42 UTC |
	|                | /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image          | functional-428453 image ls                                             | functional-428453           | jenkins | v1.32.0 | 27 Nov 23 23:42 UTC | 27 Nov 23 23:42 UTC |
	| image          | functional-428453 image save --daemon                                  | functional-428453           | jenkins | v1.32.0 | 27 Nov 23 23:42 UTC | 27 Nov 23 23:42 UTC |
	|                | gcr.io/google-containers/addon-resizer:functional-428453               |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image          | functional-428453                                                      | functional-428453           | jenkins | v1.32.0 | 27 Nov 23 23:42 UTC | 27 Nov 23 23:42 UTC |
	|                | image ls --format short                                                |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image          | functional-428453                                                      | functional-428453           | jenkins | v1.32.0 | 27 Nov 23 23:42 UTC | 27 Nov 23 23:42 UTC |
	|                | image ls --format yaml                                                 |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| ssh            | functional-428453 ssh pgrep                                            | functional-428453           | jenkins | v1.32.0 | 27 Nov 23 23:42 UTC |                     |
	|                | buildkitd                                                              |                             |         |         |                     |                     |
	| image          | functional-428453                                                      | functional-428453           | jenkins | v1.32.0 | 27 Nov 23 23:42 UTC | 27 Nov 23 23:42 UTC |
	|                | image ls --format json                                                 |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image          | functional-428453                                                      | functional-428453           | jenkins | v1.32.0 | 27 Nov 23 23:42 UTC | 27 Nov 23 23:42 UTC |
	|                | image ls --format table                                                |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image          | functional-428453 image build -t                                       | functional-428453           | jenkins | v1.32.0 | 27 Nov 23 23:42 UTC | 27 Nov 23 23:42 UTC |
	|                | localhost/my-image:functional-428453                                   |                             |         |         |                     |                     |
	|                | testdata/build --alsologtostderr                                       |                             |         |         |                     |                     |
	| image          | functional-428453 image ls                                             | functional-428453           | jenkins | v1.32.0 | 27 Nov 23 23:42 UTC | 27 Nov 23 23:42 UTC |
	| delete         | -p functional-428453                                                   | functional-428453           | jenkins | v1.32.0 | 27 Nov 23 23:42 UTC | 27 Nov 23 23:42 UTC |
	| start          | -p ingress-addon-legacy-684553                                         | ingress-addon-legacy-684553 | jenkins | v1.32.0 | 27 Nov 23 23:42 UTC | 27 Nov 23 23:44 UTC |
	|                | --kubernetes-version=v1.18.20                                          |                             |         |         |                     |                     |
	|                | --memory=4096 --wait=true                                              |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	|                | -v=5 --driver=docker                                                   |                             |         |         |                     |                     |
	|                | --container-runtime=crio                                               |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-684553                                            | ingress-addon-legacy-684553 | jenkins | v1.32.0 | 27 Nov 23 23:44 UTC | 27 Nov 23 23:44 UTC |
	|                | addons enable ingress                                                  |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                                                 |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-684553                                            | ingress-addon-legacy-684553 | jenkins | v1.32.0 | 27 Nov 23 23:44 UTC | 27 Nov 23 23:44 UTC |
	|                | addons enable ingress-dns                                              |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                                                 |                             |         |         |                     |                     |
	| ssh            | ingress-addon-legacy-684553                                            | ingress-addon-legacy-684553 | jenkins | v1.32.0 | 27 Nov 23 23:44 UTC |                     |
	|                | ssh curl -s http://127.0.0.1/                                          |                             |         |         |                     |                     |
	|                | -H 'Host: nginx.example.com'                                           |                             |         |         |                     |                     |
	| ip             | ingress-addon-legacy-684553 ip                                         | ingress-addon-legacy-684553 | jenkins | v1.32.0 | 27 Nov 23 23:46 UTC | 27 Nov 23 23:46 UTC |
	| addons         | ingress-addon-legacy-684553                                            | ingress-addon-legacy-684553 | jenkins | v1.32.0 | 27 Nov 23 23:47 UTC | 27 Nov 23 23:47 UTC |
	|                | addons disable ingress-dns                                             |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                 |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-684553                                            | ingress-addon-legacy-684553 | jenkins | v1.32.0 | 27 Nov 23 23:47 UTC | 27 Nov 23 23:47 UTC |
	|                | addons disable ingress                                                 |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                 |                             |         |         |                     |                     |
	|----------------|------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/11/27 23:42:41
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.21.4 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1127 23:42:41.885340 1489046 out.go:296] Setting OutFile to fd 1 ...
	I1127 23:42:41.885552 1489046 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1127 23:42:41.885579 1489046 out.go:309] Setting ErrFile to fd 2...
	I1127 23:42:41.885600 1489046 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1127 23:42:41.885921 1489046 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17206-1455288/.minikube/bin
	I1127 23:42:41.886392 1489046 out.go:303] Setting JSON to false
	I1127 23:42:41.887420 1489046 start.go:128] hostinfo: {"hostname":"ip-172-31-29-130","uptime":23111,"bootTime":1701105451,"procs":258,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1050-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1127 23:42:41.887517 1489046 start.go:138] virtualization:  
	I1127 23:42:41.890063 1489046 out.go:177] * [ingress-addon-legacy-684553] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I1127 23:42:41.892320 1489046 out.go:177]   - MINIKUBE_LOCATION=17206
	I1127 23:42:41.894167 1489046 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1127 23:42:41.892457 1489046 notify.go:220] Checking for updates...
	I1127 23:42:41.896059 1489046 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17206-1455288/kubeconfig
	I1127 23:42:41.898315 1489046 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17206-1455288/.minikube
	I1127 23:42:41.900124 1489046 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1127 23:42:41.901702 1489046 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1127 23:42:41.903862 1489046 driver.go:378] Setting default libvirt URI to qemu:///system
	I1127 23:42:41.930303 1489046 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1127 23:42:41.930439 1489046 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1127 23:42:42.022286 1489046 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:23 OomKillDisable:true NGoroutines:36 SystemTime:2023-11-27 23:42:42.004641613 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1050-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1127 23:42:42.022408 1489046 docker.go:295] overlay module found
	I1127 23:42:42.024754 1489046 out.go:177] * Using the docker driver based on user configuration
	I1127 23:42:42.026440 1489046 start.go:298] selected driver: docker
	I1127 23:42:42.026470 1489046 start.go:902] validating driver "docker" against <nil>
	I1127 23:42:42.026486 1489046 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1127 23:42:42.027149 1489046 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1127 23:42:42.147451 1489046 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:23 OomKillDisable:true NGoroutines:36 SystemTime:2023-11-27 23:42:42.116635991 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1050-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1127 23:42:42.147664 1489046 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1127 23:42:42.147953 1489046 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1127 23:42:42.149991 1489046 out.go:177] * Using Docker driver with root privileges
	I1127 23:42:42.152031 1489046 cni.go:84] Creating CNI manager for ""
	I1127 23:42:42.152065 1489046 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1127 23:42:42.152078 1489046 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1127 23:42:42.152093 1489046 start_flags.go:323] config:
	{Name:ingress-addon-legacy-684553 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-684553 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1127 23:42:42.154332 1489046 out.go:177] * Starting control plane node ingress-addon-legacy-684553 in cluster ingress-addon-legacy-684553
	I1127 23:42:42.156226 1489046 cache.go:121] Beginning downloading kic base image for docker with crio
	I1127 23:42:42.158107 1489046 out.go:177] * Pulling base image ...
	I1127 23:42:42.159832 1489046 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I1127 23:42:42.160388 1489046 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 in local docker daemon
	I1127 23:42:42.185876 1489046 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 in local docker daemon, skipping pull
	I1127 23:42:42.185907 1489046 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 exists in daemon, skipping load
	I1127 23:42:42.226089 1489046 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4
	I1127 23:42:42.226142 1489046 cache.go:56] Caching tarball of preloaded images
	I1127 23:42:42.226343 1489046 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I1127 23:42:42.228661 1489046 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I1127 23:42:42.230760 1489046 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 ...
	I1127 23:42:42.351097 1489046 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4?checksum=md5:8ddd7f37d9a9977fe856222993d36c3d -> /home/jenkins/minikube-integration/17206-1455288/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4
	I1127 23:42:55.468603 1489046 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 ...
	I1127 23:42:55.468712 1489046 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17206-1455288/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 ...
	I1127 23:42:56.655477 1489046 cache.go:59] Finished verifying existence of preloaded tar for  v1.18.20 on crio
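
The download URL above embeds the expected digest as a checksum=md5:... query parameter, and preload.go saves and verifies it before trusting the tarball. A minimal Go sketch of that verification step; this is not minikube's actual code, and the helper name and local path are illustrative only:

// checksum_sketch.go - stream the preload tarball through md5 and compare
// against the digest carried in the download URL above.
package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"os"
)

// verifyMD5 hashes the file and compares the hex digest with the expected value.
func verifyMD5(path, expected string) error {
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()

	h := md5.New()
	if _, err := io.Copy(h, f); err != nil {
		return err
	}
	got := hex.EncodeToString(h.Sum(nil))
	if got != expected {
		return fmt.Errorf("checksum mismatch: got %s, want %s", got, expected)
	}
	return nil
}

func main() {
	// Expected digest taken from the ?checksum=md5:... parameter in the log.
	err := verifyMD5("preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4",
		"8ddd7f37d9a9977fe856222993d36c3d")
	fmt.Println(err)
}
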
	I1127 23:42:56.655886 1489046 profile.go:148] Saving config to /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/ingress-addon-legacy-684553/config.json ...
	I1127 23:42:56.655922 1489046 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/ingress-addon-legacy-684553/config.json: {Name:mk2f6fd407d4077fd7d986a17345f00166578104 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1127 23:42:56.656117 1489046 cache.go:194] Successfully downloaded all kic artifacts
	I1127 23:42:56.656164 1489046 start.go:365] acquiring machines lock for ingress-addon-legacy-684553: {Name:mkbe07c50c26969b7237f666e31eadfa763e9014 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1127 23:42:56.656223 1489046 start.go:369] acquired machines lock for "ingress-addon-legacy-684553" in 47.425µs
	I1127 23:42:56.656247 1489046 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-684553 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-684553 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1127 23:42:56.656323 1489046 start.go:125] createHost starting for "" (driver="docker")
	I1127 23:42:56.658465 1489046 out.go:204] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1127 23:42:56.658699 1489046 start.go:159] libmachine.API.Create for "ingress-addon-legacy-684553" (driver="docker")
	I1127 23:42:56.658741 1489046 client.go:168] LocalClient.Create starting
	I1127 23:42:56.658819 1489046 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17206-1455288/.minikube/certs/ca.pem
	I1127 23:42:56.658860 1489046 main.go:141] libmachine: Decoding PEM data...
	I1127 23:42:56.658875 1489046 main.go:141] libmachine: Parsing certificate...
	I1127 23:42:56.658927 1489046 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17206-1455288/.minikube/certs/cert.pem
	I1127 23:42:56.658951 1489046 main.go:141] libmachine: Decoding PEM data...
	I1127 23:42:56.658963 1489046 main.go:141] libmachine: Parsing certificate...
	I1127 23:42:56.659331 1489046 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-684553 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1127 23:42:56.677387 1489046 cli_runner.go:211] docker network inspect ingress-addon-legacy-684553 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1127 23:42:56.677481 1489046 network_create.go:281] running [docker network inspect ingress-addon-legacy-684553] to gather additional debugging logs...
	I1127 23:42:56.677502 1489046 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-684553
	W1127 23:42:56.696395 1489046 cli_runner.go:211] docker network inspect ingress-addon-legacy-684553 returned with exit code 1
	I1127 23:42:56.696439 1489046 network_create.go:284] error running [docker network inspect ingress-addon-legacy-684553]: docker network inspect ingress-addon-legacy-684553: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ingress-addon-legacy-684553 not found
	I1127 23:42:56.696454 1489046 network_create.go:286] output of [docker network inspect ingress-addon-legacy-684553]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ingress-addon-legacy-684553 not found
	
	** /stderr **
	I1127 23:42:56.696585 1489046 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1127 23:42:56.713900 1489046 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400000e240}
	I1127 23:42:56.713943 1489046 network_create.go:124] attempt to create docker network ingress-addon-legacy-684553 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1127 23:42:56.714009 1489046 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ingress-addon-legacy-684553 ingress-addon-legacy-684553
	I1127 23:42:56.781749 1489046 network_create.go:108] docker network ingress-addon-legacy-684553 192.168.49.0/24 created
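
network.go:209 above settled on 192.168.49.0/24 after scanning for a free private subnet. A rough Go sketch of a first-free-subnet scan, assuming a fixed candidate list and a simple overlap test; minikube's real logic also walks host interfaces and holds reservations, so this is illustrative only:

// subnet_sketch.go - pick the first candidate subnet that does not overlap
// any subnet already in use by an existing docker network.
package main

import (
	"fmt"
	"net"
)

// overlaps reports whether two CIDR blocks share any addresses.
func overlaps(a, b *net.IPNet) bool {
	return a.Contains(b.IP) || b.Contains(a.IP)
}

func firstFreeSubnet(inUse []string) (string, error) {
	candidates := []string{"192.168.49.0/24", "192.168.58.0/24", "192.168.67.0/24"}
	for _, c := range candidates {
		_, cNet, err := net.ParseCIDR(c)
		if err != nil {
			return "", err
		}
		free := true
		for _, u := range inUse {
			_, uNet, err := net.ParseCIDR(u)
			if err != nil {
				return "", err
			}
			if overlaps(cNet, uNet) {
				free = false
				break
			}
		}
		if free {
			return c, nil
		}
	}
	return "", fmt.Errorf("no free subnet among candidates")
}

func main() {
	// Suppose the default docker bridge already holds 172.17.0.0/16.
	s, err := firstFreeSubnet([]string{"172.17.0.0/16"})
	fmt.Println(s, err) // 192.168.49.0/24 <nil>
}
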
	I1127 23:42:56.781780 1489046 kic.go:121] calculated static IP "192.168.49.2" for the "ingress-addon-legacy-684553" container
	I1127 23:42:56.781974 1489046 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1127 23:42:56.798875 1489046 cli_runner.go:164] Run: docker volume create ingress-addon-legacy-684553 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-684553 --label created_by.minikube.sigs.k8s.io=true
	I1127 23:42:56.820563 1489046 oci.go:103] Successfully created a docker volume ingress-addon-legacy-684553
	I1127 23:42:56.820654 1489046 cli_runner.go:164] Run: docker run --rm --name ingress-addon-legacy-684553-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-684553 --entrypoint /usr/bin/test -v ingress-addon-legacy-684553:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 -d /var/lib
	I1127 23:42:58.321954 1489046 cli_runner.go:217] Completed: docker run --rm --name ingress-addon-legacy-684553-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-684553 --entrypoint /usr/bin/test -v ingress-addon-legacy-684553:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 -d /var/lib: (1.50125592s)
	I1127 23:42:58.321996 1489046 oci.go:107] Successfully prepared a docker volume ingress-addon-legacy-684553
	I1127 23:42:58.322039 1489046 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I1127 23:42:58.322067 1489046 kic.go:194] Starting extracting preloaded images to volume ...
	I1127 23:42:58.322170 1489046 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17206-1455288/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-684553:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 -I lz4 -xf /preloaded.tar -C /extractDir
	I1127 23:43:03.456536 1489046 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17206-1455288/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-684553:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 -I lz4 -xf /preloaded.tar -C /extractDir: (5.134317786s)
	I1127 23:43:03.456572 1489046 kic.go:203] duration metric: took 5.134502 seconds to extract preloaded images to volume
	W1127 23:43:03.456722 1489046 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1127 23:43:03.456834 1489046 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1127 23:43:03.525188 1489046 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ingress-addon-legacy-684553 --name ingress-addon-legacy-684553 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-684553 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ingress-addon-legacy-684553 --network ingress-addon-legacy-684553 --ip 192.168.49.2 --volume ingress-addon-legacy-684553:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50
	I1127 23:43:03.919999 1489046 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-684553 --format={{.State.Running}}
	I1127 23:43:03.941239 1489046 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-684553 --format={{.State.Status}}
	I1127 23:43:03.970590 1489046 cli_runner.go:164] Run: docker exec ingress-addon-legacy-684553 stat /var/lib/dpkg/alternatives/iptables
	I1127 23:43:04.037242 1489046 oci.go:144] the created container "ingress-addon-legacy-684553" has a running status.
	I1127 23:43:04.037273 1489046 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17206-1455288/.minikube/machines/ingress-addon-legacy-684553/id_rsa...
	I1127 23:43:04.524539 1489046 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-1455288/.minikube/machines/ingress-addon-legacy-684553/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1127 23:43:04.524642 1489046 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17206-1455288/.minikube/machines/ingress-addon-legacy-684553/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1127 23:43:04.559231 1489046 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-684553 --format={{.State.Status}}
	I1127 23:43:04.592838 1489046 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1127 23:43:04.592859 1489046 kic_runner.go:114] Args: [docker exec --privileged ingress-addon-legacy-684553 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1127 23:43:04.674506 1489046 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-684553 --format={{.State.Status}}
	I1127 23:43:04.707989 1489046 machine.go:88] provisioning docker machine ...
	I1127 23:43:04.708019 1489046 ubuntu.go:169] provisioning hostname "ingress-addon-legacy-684553"
	I1127 23:43:04.708090 1489046 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-684553
	I1127 23:43:04.739153 1489046 main.go:141] libmachine: Using SSH client type: native
	I1127 23:43:04.739577 1489046 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3be600] 0x3c0d70 <nil>  [] 0s} 127.0.0.1 34084 <nil> <nil>}
	I1127 23:43:04.739599 1489046 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-684553 && echo "ingress-addon-legacy-684553" | sudo tee /etc/hostname
	I1127 23:43:04.975047 1489046 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-684553
	
	I1127 23:43:04.975203 1489046 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-684553
	I1127 23:43:05.005779 1489046 main.go:141] libmachine: Using SSH client type: native
	I1127 23:43:05.006259 1489046 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3be600] 0x3c0d70 <nil>  [] 0s} 127.0.0.1 34084 <nil> <nil>}
	I1127 23:43:05.006281 1489046 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-684553' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-684553/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-684553' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1127 23:43:05.147569 1489046 main.go:141] libmachine: SSH cmd err, output: <nil>: 
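
The shell script above makes the /etc/hosts edit idempotent: drop any stale line for the hostname, then write a fresh 127.0.1.1 entry. The same pattern sketched in Go; the file path and helper name are chosen for illustration, not taken from minikube:

// hosts_sketch.go - keep every hosts line that does not already map the
// hostname, then append a fresh entry, mirroring the sed/tee script above.
package main

import (
	"fmt"
	"os"
	"strings"
)

func ensureHostEntry(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		trimmed := strings.TrimSpace(line)
		if strings.HasSuffix(trimmed, "\t"+name) || strings.HasSuffix(trimmed, " "+name) {
			continue // drop stale entries for this name
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	// A local test file stands in for /etc/hosts here.
	err := ensureHostEntry("hosts.test", "127.0.1.1", "ingress-addon-legacy-684553")
	fmt.Println(err)
}
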
	I1127 23:43:05.147648 1489046 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17206-1455288/.minikube CaCertPath:/home/jenkins/minikube-integration/17206-1455288/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17206-1455288/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17206-1455288/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17206-1455288/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17206-1455288/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17206-1455288/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17206-1455288/.minikube}
	I1127 23:43:05.147686 1489046 ubuntu.go:177] setting up certificates
	I1127 23:43:05.147717 1489046 provision.go:83] configureAuth start
	I1127 23:43:05.147817 1489046 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-684553
	I1127 23:43:05.172041 1489046 provision.go:138] copyHostCerts
	I1127 23:43:05.172086 1489046 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-1455288/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17206-1455288/.minikube/ca.pem
	I1127 23:43:05.172118 1489046 exec_runner.go:144] found /home/jenkins/minikube-integration/17206-1455288/.minikube/ca.pem, removing ...
	I1127 23:43:05.172130 1489046 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17206-1455288/.minikube/ca.pem
	I1127 23:43:05.172206 1489046 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17206-1455288/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17206-1455288/.minikube/ca.pem (1078 bytes)
	I1127 23:43:05.172285 1489046 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-1455288/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17206-1455288/.minikube/cert.pem
	I1127 23:43:05.172309 1489046 exec_runner.go:144] found /home/jenkins/minikube-integration/17206-1455288/.minikube/cert.pem, removing ...
	I1127 23:43:05.172316 1489046 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17206-1455288/.minikube/cert.pem
	I1127 23:43:05.172343 1489046 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17206-1455288/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17206-1455288/.minikube/cert.pem (1123 bytes)
	I1127 23:43:05.172394 1489046 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-1455288/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17206-1455288/.minikube/key.pem
	I1127 23:43:05.172416 1489046 exec_runner.go:144] found /home/jenkins/minikube-integration/17206-1455288/.minikube/key.pem, removing ...
	I1127 23:43:05.172426 1489046 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17206-1455288/.minikube/key.pem
	I1127 23:43:05.172456 1489046 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17206-1455288/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17206-1455288/.minikube/key.pem (1679 bytes)
	I1127 23:43:05.172511 1489046 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17206-1455288/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17206-1455288/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17206-1455288/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-684553 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube ingress-addon-legacy-684553]
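
provision.go:112 above issues a server certificate whose SANs cover the node IP, loopback, and the cluster hostnames. A compact sketch with Go's crypto/x509 that reproduces that SAN list; for brevity it self-signs, whereas minikube signs with its CA key pair (ca.pem/ca-key.pem), so treat this as an approximation:

// servercert_sketch.go - generate a key and a certificate carrying the SANs
// from the log line above, then print the certificate as PEM.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ingress-addon-legacy-684553"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration above
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs from the san=[...] list in the log: IPs plus DNS names.
		IPAddresses: []net.IP{net.ParseIP("192.168.49.2"), net.ParseIP("127.0.0.1")},
		DNSNames:    []string{"localhost", "minikube", "ingress-addon-legacy-684553"},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
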
	I1127 23:43:05.639481 1489046 provision.go:172] copyRemoteCerts
	I1127 23:43:05.639550 1489046 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1127 23:43:05.639602 1489046 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-684553
	I1127 23:43:05.664136 1489046 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34084 SSHKeyPath:/home/jenkins/minikube-integration/17206-1455288/.minikube/machines/ingress-addon-legacy-684553/id_rsa Username:docker}
	I1127 23:43:05.761389 1489046 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-1455288/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1127 23:43:05.761450 1489046 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-1455288/.minikube/machines/server.pem --> /etc/docker/server.pem (1253 bytes)
	I1127 23:43:05.790614 1489046 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-1455288/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1127 23:43:05.790678 1489046 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-1455288/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1127 23:43:05.820329 1489046 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-1455288/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1127 23:43:05.820389 1489046 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-1455288/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1127 23:43:05.849446 1489046 provision.go:86] duration metric: configureAuth took 701.69878ms
	I1127 23:43:05.849474 1489046 ubuntu.go:193] setting minikube options for container-runtime
	I1127 23:43:05.849672 1489046 config.go:182] Loaded profile config "ingress-addon-legacy-684553": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I1127 23:43:05.849780 1489046 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-684553
	I1127 23:43:05.868119 1489046 main.go:141] libmachine: Using SSH client type: native
	I1127 23:43:05.868536 1489046 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3be600] 0x3c0d70 <nil>  [] 0s} 127.0.0.1 34084 <nil> <nil>}
	I1127 23:43:05.868554 1489046 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1127 23:43:06.172192 1489046 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1127 23:43:06.172276 1489046 machine.go:91] provisioned docker machine in 1.464266182s
	I1127 23:43:06.172306 1489046 client.go:171] LocalClient.Create took 9.513553793s
	I1127 23:43:06.172341 1489046 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-684553" took 9.51364139s
	I1127 23:43:06.172362 1489046 start.go:300] post-start starting for "ingress-addon-legacy-684553" (driver="docker")
	I1127 23:43:06.172384 1489046 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1127 23:43:06.172480 1489046 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1127 23:43:06.172548 1489046 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-684553
	I1127 23:43:06.190323 1489046 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34084 SSHKeyPath:/home/jenkins/minikube-integration/17206-1455288/.minikube/machines/ingress-addon-legacy-684553/id_rsa Username:docker}
	I1127 23:43:06.285029 1489046 ssh_runner.go:195] Run: cat /etc/os-release
	I1127 23:43:06.289799 1489046 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1127 23:43:06.289833 1489046 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1127 23:43:06.289845 1489046 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1127 23:43:06.289898 1489046 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1127 23:43:06.289913 1489046 filesync.go:126] Scanning /home/jenkins/minikube-integration/17206-1455288/.minikube/addons for local assets ...
	I1127 23:43:06.289973 1489046 filesync.go:126] Scanning /home/jenkins/minikube-integration/17206-1455288/.minikube/files for local assets ...
	I1127 23:43:06.290067 1489046 filesync.go:149] local asset: /home/jenkins/minikube-integration/17206-1455288/.minikube/files/etc/ssl/certs/14606522.pem -> 14606522.pem in /etc/ssl/certs
	I1127 23:43:06.290080 1489046 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-1455288/.minikube/files/etc/ssl/certs/14606522.pem -> /etc/ssl/certs/14606522.pem
	I1127 23:43:06.290193 1489046 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1127 23:43:06.301088 1489046 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-1455288/.minikube/files/etc/ssl/certs/14606522.pem --> /etc/ssl/certs/14606522.pem (1708 bytes)
	I1127 23:43:06.328910 1489046 start.go:303] post-start completed in 156.519615ms
	I1127 23:43:06.329285 1489046 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-684553
	I1127 23:43:06.347115 1489046 profile.go:148] Saving config to /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/ingress-addon-legacy-684553/config.json ...
	I1127 23:43:06.347391 1489046 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1127 23:43:06.347441 1489046 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-684553
	I1127 23:43:06.367849 1489046 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34084 SSHKeyPath:/home/jenkins/minikube-integration/17206-1455288/.minikube/machines/ingress-addon-legacy-684553/id_rsa Username:docker}
	I1127 23:43:06.460075 1489046 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1127 23:43:06.465797 1489046 start.go:128] duration metric: createHost completed in 9.809457674s
	I1127 23:43:06.465821 1489046 start.go:83] releasing machines lock for "ingress-addon-legacy-684553", held for 9.809585319s
	I1127 23:43:06.465911 1489046 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-684553
	I1127 23:43:06.483882 1489046 ssh_runner.go:195] Run: cat /version.json
	I1127 23:43:06.483894 1489046 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1127 23:43:06.483935 1489046 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-684553
	I1127 23:43:06.483958 1489046 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-684553
	I1127 23:43:06.503842 1489046 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34084 SSHKeyPath:/home/jenkins/minikube-integration/17206-1455288/.minikube/machines/ingress-addon-legacy-684553/id_rsa Username:docker}
	I1127 23:43:06.504227 1489046 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34084 SSHKeyPath:/home/jenkins/minikube-integration/17206-1455288/.minikube/machines/ingress-addon-legacy-684553/id_rsa Username:docker}
	I1127 23:43:06.746053 1489046 ssh_runner.go:195] Run: systemctl --version
	I1127 23:43:06.751577 1489046 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1127 23:43:06.899533 1489046 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1127 23:43:06.905385 1489046 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1127 23:43:06.930311 1489046 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1127 23:43:06.930395 1489046 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1127 23:43:06.970879 1489046 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I1127 23:43:06.970915 1489046 start.go:472] detecting cgroup driver to use...
	I1127 23:43:06.970965 1489046 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1127 23:43:06.971033 1489046 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1127 23:43:06.990688 1489046 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1127 23:43:07.005803 1489046 docker.go:203] disabling cri-docker service (if available) ...
	I1127 23:43:07.005910 1489046 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1127 23:43:07.022943 1489046 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1127 23:43:07.040609 1489046 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1127 23:43:07.143230 1489046 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1127 23:43:07.255213 1489046 docker.go:219] disabling docker service ...
	I1127 23:43:07.255296 1489046 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1127 23:43:07.278183 1489046 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1127 23:43:07.292341 1489046 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1127 23:43:07.390269 1489046 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1127 23:43:07.496689 1489046 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1127 23:43:07.510428 1489046 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1127 23:43:07.531211 1489046 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1127 23:43:07.531281 1489046 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1127 23:43:07.543360 1489046 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1127 23:43:07.543430 1489046 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1127 23:43:07.556309 1489046 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1127 23:43:07.569145 1489046 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1127 23:43:07.581211 1489046 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1127 23:43:07.592802 1489046 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1127 23:43:07.603439 1489046 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1127 23:43:07.614227 1489046 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1127 23:43:07.702240 1489046 ssh_runner.go:195] Run: sudo systemctl restart crio
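
The sed invocations above rewrite two keys in /etc/crio/crio.conf.d/02-crio.conf before CRI-O is restarted. The same edit sketched in Go with a line-anchored regexp; the helper name and the local stand-in path are illustrative, since minikube runs sed over SSH as logged:

// crioconf_sketch.go - rewrite the pause_image and cgroup_manager keys in a
// CRI-O drop-in config, equivalent to the sed commands above.
package main

import (
	"fmt"
	"os"
	"regexp"
)

// setKey replaces any line mentioning the key (commented or not) with key = "value".
func setKey(conf []byte, key, value string) []byte {
	re := regexp.MustCompile(`(?m)^.*` + key + ` = .*$`)
	return re.ReplaceAll(conf, []byte(key+` = "`+value+`"`))
}

func main() {
	const path = "02-crio.conf" // stand-in for /etc/crio/crio.conf.d/02-crio.conf
	conf, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	conf = setKey(conf, "pause_image", "registry.k8s.io/pause:3.2")
	conf = setKey(conf, "cgroup_manager", "cgroupfs")
	if err := os.WriteFile(path, conf, 0644); err != nil {
		panic(err)
	}
	fmt.Println("rewrote", path)
}
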
	I1127 23:43:07.825086 1489046 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1127 23:43:07.825198 1489046 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1127 23:43:07.829951 1489046 start.go:540] Will wait 60s for crictl version
	I1127 23:43:07.830062 1489046 ssh_runner.go:195] Run: which crictl
	I1127 23:43:07.834675 1489046 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1127 23:43:07.878128 1489046 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1127 23:43:07.878226 1489046 ssh_runner.go:195] Run: crio --version
	I1127 23:43:07.925985 1489046 ssh_runner.go:195] Run: crio --version
	I1127 23:43:07.972812 1489046 out.go:177] * Preparing Kubernetes v1.18.20 on CRI-O 1.24.6 ...
	I1127 23:43:07.974450 1489046 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-684553 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1127 23:43:07.991539 1489046 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1127 23:43:07.996039 1489046 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1127 23:43:08.012936 1489046 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I1127 23:43:08.013004 1489046 ssh_runner.go:195] Run: sudo crictl images --output json
	I1127 23:43:08.064715 1489046 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I1127 23:43:08.064789 1489046 ssh_runner.go:195] Run: which lz4
	I1127 23:43:08.069527 1489046 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-1455288/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 -> /preloaded.tar.lz4
	I1127 23:43:08.069632 1489046 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1127 23:43:08.074384 1489046 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1127 23:43:08.074420 1489046 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-1455288/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 --> /preloaded.tar.lz4 (489766197 bytes)
	I1127 23:43:10.308999 1489046 crio.go:444] Took 2.239407 seconds to copy over tarball
	I1127 23:43:10.309116 1489046 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1127 23:43:12.942150 1489046 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.632987851s)
	I1127 23:43:12.942191 1489046 crio.go:451] Took 2.633109 seconds to extract the tarball
	I1127 23:43:12.942203 1489046 ssh_runner.go:146] rm: /preloaded.tar.lz4
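
The preload is an lz4-compressed tarball copied to the node and unpacked under /var with "tar -I lz4", as logged above. An in-process equivalent sketched in Go, assuming the third-party github.com/pierrec/lz4/v4 package; minikube itself shells out to tar, and this sketch handles directories and regular files only:

// extract_sketch.go - unpack a .tar.lz4 archive into a destination directory.
package main

import (
	"archive/tar"
	"io"
	"os"
	"path/filepath"

	"github.com/pierrec/lz4/v4"
)

func extractTarLz4(src, dstDir string) error {
	f, err := os.Open(src)
	if err != nil {
		return err
	}
	defer f.Close()

	tr := tar.NewReader(lz4.NewReader(f))
	for {
		hdr, err := tr.Next()
		if err == io.EOF {
			return nil // end of archive
		}
		if err != nil {
			return err
		}
		target := filepath.Join(dstDir, hdr.Name)
		switch hdr.Typeflag {
		case tar.TypeDir:
			if err := os.MkdirAll(target, 0755); err != nil {
				return err
			}
		case tar.TypeReg:
			if err := os.MkdirAll(filepath.Dir(target), 0755); err != nil {
				return err
			}
			out, err := os.Create(target)
			if err != nil {
				return err
			}
			if _, err := io.Copy(out, tr); err != nil {
				out.Close()
				return err
			}
			out.Close()
		}
	}
}

func main() {
	if err := extractTarLz4("preloaded.tar.lz4", "extracted"); err != nil {
		panic(err)
	}
}
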
	I1127 23:43:13.134339 1489046 ssh_runner.go:195] Run: sudo crictl images --output json
	I1127 23:43:13.178183 1489046 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I1127 23:43:13.178209 1489046 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1127 23:43:13.178278 1489046 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1127 23:43:13.178329 1489046 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
	I1127 23:43:13.178517 1489046 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I1127 23:43:13.178536 1489046 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1127 23:43:13.178592 1489046 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I1127 23:43:13.178618 1489046 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
	I1127 23:43:13.178655 1489046 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I1127 23:43:13.178517 1489046 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
	I1127 23:43:13.180460 1489046 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I1127 23:43:13.180520 1489046 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
	I1127 23:43:13.180565 1489046 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
	I1127 23:43:13.180706 1489046 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I1127 23:43:13.180473 1489046 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
	I1127 23:43:13.180823 1489046 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1127 23:43:13.180864 1489046 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1127 23:43:13.181106 1489046 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	W1127 23:43:13.528157 1489046 image.go:265] image registry.k8s.io/kube-apiserver:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	W1127 23:43:13.528256 1489046 image.go:265] image registry.k8s.io/kube-proxy:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I1127 23:43:13.528335 1489046 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.18.20
	I1127 23:43:13.528718 1489046 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.18.20
	W1127 23:43:13.534526 1489046 image.go:265] image registry.k8s.io/coredns:1.6.7 arch mismatch: want arm64 got amd64. fixing
	I1127 23:43:13.534713 1489046 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.7
	I1127 23:43:13.538555 1489046 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	W1127 23:43:13.556604 1489046 image.go:265] image registry.k8s.io/kube-scheduler:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I1127 23:43:13.556825 1489046 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.18.20
	W1127 23:43:13.561515 1489046 image.go:265] image registry.k8s.io/etcd:3.4.3-0 arch mismatch: want arm64 got amd64. fixing
	I1127 23:43:13.561758 1489046 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.3-0
	W1127 23:43:13.565012 1489046 image.go:265] image registry.k8s.io/kube-controller-manager:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I1127 23:43:13.565202 1489046 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.18.20
	I1127 23:43:13.700359 1489046 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "d353007847ec85700463981309a5846c8d9c93fbcd1323104266212926d68257" in container runtime
	I1127 23:43:13.700423 1489046 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
	I1127 23:43:13.700483 1489046 ssh_runner.go:195] Run: which crictl
	I1127 23:43:13.700555 1489046 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "b11cdc97ac6ac4ef2b3b0662edbe16597084b17cbc8e3d61fcaf4ef827a7ed18" in container runtime
	I1127 23:43:13.700572 1489046 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.18.20
	I1127 23:43:13.700592 1489046 ssh_runner.go:195] Run: which crictl
	W1127 23:43:13.738797 1489046 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1127 23:43:13.738967 1489046 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1127 23:43:13.763386 1489046 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "ff3af22d8878afc6985d3fec3e066d00ef431aa166c3a01ac58f1990adc92a2c" in container runtime
	I1127 23:43:13.763559 1489046 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.7
	I1127 23:43:13.763632 1489046 ssh_runner.go:195] Run: which crictl
	I1127 23:43:13.763491 1489046 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "2a060e2e7101d419352bf82c613158587400be743482d9a537ec4a9d1b4eb93c" in container runtime
	I1127 23:43:13.763742 1489046 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1127 23:43:13.763787 1489046 ssh_runner.go:195] Run: which crictl
	I1127 23:43:13.772087 1489046 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "177548d745cb87f773d02f41d453af2f2a1479dbe3c32e749cf6d8145c005e79" in container runtime
	I1127 23:43:13.772133 1489046 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
	I1127 23:43:13.772182 1489046 ssh_runner.go:195] Run: which crictl
	I1127 23:43:13.772275 1489046 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "29dd247b2572efbe28fcaea3fef1c5d72593da59f7350e3f6d2e6618983f9c03" in container runtime
	I1127 23:43:13.772299 1489046 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.3-0
	I1127 23:43:13.772327 1489046 ssh_runner.go:195] Run: which crictl
	I1127 23:43:13.772393 1489046 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "297c79afbdb81ceb4cf857e0c54a0de7b6ce7ebe01e6cab68fc8baf342be3ea7" in container runtime
	I1127 23:43:13.772411 1489046 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1127 23:43:13.772431 1489046 ssh_runner.go:195] Run: which crictl
	I1127 23:43:13.772495 1489046 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.18.20
	I1127 23:43:13.772570 1489046 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.18.20
	I1127 23:43:13.933976 1489046 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1127 23:43:13.934025 1489046 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1127 23:43:13.934077 1489046 ssh_runner.go:195] Run: which crictl
	I1127 23:43:13.934091 1489046 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1127 23:43:13.934159 1489046 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.7
	I1127 23:43:13.934232 1489046 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.18.20
	I1127 23:43:13.934267 1489046 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.3-0
	I1127 23:43:13.934287 1489046 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.18.20
	I1127 23:43:13.934410 1489046 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17206-1455288/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.18.20
	I1127 23:43:13.934460 1489046 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17206-1455288/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.18.20
	I1127 23:43:14.074937 1489046 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17206-1455288/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.18.20
	I1127 23:43:14.075015 1489046 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1127 23:43:14.075093 1489046 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17206-1455288/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2
	I1127 23:43:14.075129 1489046 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17206-1455288/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.6.7
	I1127 23:43:14.075166 1489046 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17206-1455288/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.18.20
	I1127 23:43:14.075204 1489046 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17206-1455288/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.3-0
	I1127 23:43:14.136884 1489046 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17206-1455288/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1127 23:43:14.137005 1489046 cache_images.go:92] LoadImages completed in 958.77985ms
	W1127 23:43:14.137096 1489046 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17206-1455288/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.18.20: no such file or directory
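
The W-lines above come from comparing each cached image's architecture against the host ("want arm64 got amd64"). A small Go sketch of that check, shelling out to podman as the surrounding log does; treating --format {{.Architecture}} as the right inspect field is an assumption made for this illustration:

// archcheck_sketch.go - flag images whose architecture differs from the host's.
package main

import (
	"fmt"
	"os/exec"
	"runtime"
	"strings"
)

func imageArch(image string) (string, error) {
	out, err := exec.Command("podman", "image", "inspect",
		"--format", "{{.Architecture}}", image).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	img := "registry.k8s.io/kube-apiserver:v1.18.20"
	arch, err := imageArch(img)
	if err != nil {
		panic(err)
	}
	if arch != runtime.GOARCH {
		fmt.Printf("image %s arch mismatch: want %s got %s. fixing\n",
			img, runtime.GOARCH, arch)
	}
}
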
	I1127 23:43:14.137303 1489046 ssh_runner.go:195] Run: crio config
	I1127 23:43:14.199358 1489046 cni.go:84] Creating CNI manager for ""
	I1127 23:43:14.199385 1489046 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1127 23:43:14.199438 1489046 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1127 23:43:14.199465 1489046 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-684553 NodeName:ingress-addon-legacy-684553 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1127 23:43:14.199656 1489046 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "ingress-addon-legacy-684553"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
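The evictionHard values above print as "0%!"(MISSING)" because the rendered YAML, which contains literal % characters, was passed through a printf-style logger as its format string; the intended values are plain "0%". A short Go repro of that fmt behavior and the usual fix:

	package main

	import "fmt"

	func main() {
		rendered := `nodefs.available: "0%"`
		// Bug: using rendered output as the format string. `%"` is not a
		// valid verb and no argument follows, so fmt emits %!"(MISSING).
		// (go vet flags exactly this pattern.)
		fmt.Printf(rendered + "\n") // nodefs.available: "0%!"(MISSING)
		// Fix: pass the rendered text as a value instead.
		fmt.Printf("%s\n", rendered) // nodefs.available: "0%"
	}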
	I1127 23:43:14.199813 1489046 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=ingress-addon-legacy-684553 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-684553 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1127 23:43:14.199938 1489046 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I1127 23:43:14.210449 1489046 binaries.go:44] Found k8s binaries, skipping transfer
	I1127 23:43:14.210523 1489046 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1127 23:43:14.221270 1489046 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (486 bytes)
	I1127 23:43:14.242434 1489046 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I1127 23:43:14.263350 1489046 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
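The "scp memory -->" lines mean the payload (the systemd drop-in, the unit file, the kubeadm YAML) was rendered in memory and streamed over the existing SSH connection rather than copied from a local file. A minimal sketch of that pattern with golang.org/x/crypto/ssh, assuming an already-connected *ssh.Client; the helper and the sudo-tee transport are illustrative, not minikube's exact implementation:

	package sshutil

	import (
		"bytes"

		"golang.org/x/crypto/ssh"
	)

	// copyMemory streams an in-memory payload to a remote path, the moral
	// equivalent of the "scp memory --> <path> (<n> bytes)" lines above.
	// Assumes passwordless sudo on the target, as in the kic container.
	func copyMemory(client *ssh.Client, payload []byte, dst string) error {
		sess, err := client.NewSession()
		if err != nil {
			return err
		}
		defer sess.Close()
		sess.Stdin = bytes.NewReader(payload)
		return sess.Run("sudo tee " + dst + " >/dev/null")
	}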
	I1127 23:43:14.284508 1489046 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1127 23:43:14.288858 1489046 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1127 23:43:14.301694 1489046 certs.go:56] Setting up /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/ingress-addon-legacy-684553 for IP: 192.168.49.2
	I1127 23:43:14.301768 1489046 certs.go:190] acquiring lock for shared ca certs: {Name:mk268ef230412b241734813f303d69d9b36c42ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1127 23:43:14.301944 1489046 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17206-1455288/.minikube/ca.key
	I1127 23:43:14.302009 1489046 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17206-1455288/.minikube/proxy-client-ca.key
	I1127 23:43:14.302080 1489046 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/ingress-addon-legacy-684553/client.key
	I1127 23:43:14.302096 1489046 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/ingress-addon-legacy-684553/client.crt with IP's: []
	I1127 23:43:14.738217 1489046 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/ingress-addon-legacy-684553/client.crt ...
	I1127 23:43:14.738248 1489046 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/ingress-addon-legacy-684553/client.crt: {Name:mkd663252813d4ed5243b5c8473056f0b4163a58 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1127 23:43:14.738462 1489046 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/ingress-addon-legacy-684553/client.key ...
	I1127 23:43:14.738485 1489046 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/ingress-addon-legacy-684553/client.key: {Name:mk177030d160329cfcf011c7f7ef4a3a53356450 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1127 23:43:14.738581 1489046 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/ingress-addon-legacy-684553/apiserver.key.dd3b5fb2
	I1127 23:43:14.738598 1489046 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/ingress-addon-legacy-684553/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1127 23:43:15.357027 1489046 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/ingress-addon-legacy-684553/apiserver.crt.dd3b5fb2 ...
	I1127 23:43:15.357060 1489046 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/ingress-addon-legacy-684553/apiserver.crt.dd3b5fb2: {Name:mk1365f3a14e334200f4cc36e5e3c48533847c8b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1127 23:43:15.357251 1489046 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/ingress-addon-legacy-684553/apiserver.key.dd3b5fb2 ...
	I1127 23:43:15.357266 1489046 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/ingress-addon-legacy-684553/apiserver.key.dd3b5fb2: {Name:mkbde472b73f59e46e26c3979ba117220c0aeadb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1127 23:43:15.357350 1489046 certs.go:337] copying /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/ingress-addon-legacy-684553/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/ingress-addon-legacy-684553/apiserver.crt
	I1127 23:43:15.357432 1489046 certs.go:341] copying /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/ingress-addon-legacy-684553/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/ingress-addon-legacy-684553/apiserver.key
	I1127 23:43:15.357492 1489046 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/ingress-addon-legacy-684553/proxy-client.key
	I1127 23:43:15.357509 1489046 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/ingress-addon-legacy-684553/proxy-client.crt with IP's: []
	I1127 23:43:16.146786 1489046 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/ingress-addon-legacy-684553/proxy-client.crt ...
	I1127 23:43:16.146817 1489046 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/ingress-addon-legacy-684553/proxy-client.crt: {Name:mk381891f40b06ba3425677cae2c2c83704c9661 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1127 23:43:16.146999 1489046 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/ingress-addon-legacy-684553/proxy-client.key ...
	I1127 23:43:16.147014 1489046 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/ingress-addon-legacy-684553/proxy-client.key: {Name:mka864791c35a1b6908ad9fb6d7b642fc6091c7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
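The certificate steps above reuse the shared minikube CA and mint three leaf certs: a kubectl client cert, an apiserver serving cert whose IP SANs cover the node IP, the service VIP, localhost, and 10.0.0.1, and an aggregator proxy client cert. A sketch of the serving-cert signing step with crypto/x509, assuming the CA cert and key are already loaded (the function name, serial, and validity period are illustrative):

	package certs

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"math/big"
		"net"
		"time"
	)

	// signServingCert issues an apiserver-style serving certificate signed
	// by an existing CA, covering the same IP SANs the log shows. PEM
	// encoding of the DER output is left to the caller.
	func signServingCert(caCert *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, *rsa.PrivateKey, error) {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			return nil, nil, err
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{CommonName: "minikube"},
			IPAddresses: []net.IP{
				net.ParseIP("192.168.49.2"), net.ParseIP("10.96.0.1"),
				net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
			},
			NotBefore:   time.Now(),
			NotAfter:    time.Now().AddDate(3, 0, 0),
			KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
		return der, key, err
	}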
	I1127 23:43:16.147099 1489046 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/ingress-addon-legacy-684553/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1127 23:43:16.147121 1489046 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/ingress-addon-legacy-684553/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1127 23:43:16.147133 1489046 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/ingress-addon-legacy-684553/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1127 23:43:16.147150 1489046 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/ingress-addon-legacy-684553/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1127 23:43:16.147164 1489046 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-1455288/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1127 23:43:16.147240 1489046 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-1455288/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1127 23:43:16.147258 1489046 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-1455288/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1127 23:43:16.147270 1489046 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-1455288/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1127 23:43:16.147326 1489046 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-1455288/.minikube/certs/home/jenkins/minikube-integration/17206-1455288/.minikube/certs/1460652.pem (1338 bytes)
	W1127 23:43:16.147365 1489046 certs.go:433] ignoring /home/jenkins/minikube-integration/17206-1455288/.minikube/certs/home/jenkins/minikube-integration/17206-1455288/.minikube/certs/1460652_empty.pem, impossibly tiny 0 bytes
	I1127 23:43:16.147382 1489046 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-1455288/.minikube/certs/home/jenkins/minikube-integration/17206-1455288/.minikube/certs/ca-key.pem (1679 bytes)
	I1127 23:43:16.147416 1489046 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-1455288/.minikube/certs/home/jenkins/minikube-integration/17206-1455288/.minikube/certs/ca.pem (1078 bytes)
	I1127 23:43:16.147448 1489046 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-1455288/.minikube/certs/home/jenkins/minikube-integration/17206-1455288/.minikube/certs/cert.pem (1123 bytes)
	I1127 23:43:16.147506 1489046 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-1455288/.minikube/certs/home/jenkins/minikube-integration/17206-1455288/.minikube/certs/key.pem (1679 bytes)
	I1127 23:43:16.147556 1489046 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-1455288/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17206-1455288/.minikube/files/etc/ssl/certs/14606522.pem (1708 bytes)
	I1127 23:43:16.147587 1489046 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-1455288/.minikube/files/etc/ssl/certs/14606522.pem -> /usr/share/ca-certificates/14606522.pem
	I1127 23:43:16.147605 1489046 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-1455288/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1127 23:43:16.147619 1489046 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-1455288/.minikube/certs/1460652.pem -> /usr/share/ca-certificates/1460652.pem
	I1127 23:43:16.148177 1489046 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/ingress-addon-legacy-684553/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1127 23:43:16.177108 1489046 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/ingress-addon-legacy-684553/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1127 23:43:16.205709 1489046 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/ingress-addon-legacy-684553/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1127 23:43:16.233944 1489046 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/ingress-addon-legacy-684553/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1127 23:43:16.262620 1489046 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-1455288/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1127 23:43:16.291626 1489046 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-1455288/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1127 23:43:16.319353 1489046 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-1455288/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1127 23:43:16.347188 1489046 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-1455288/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1127 23:43:16.375003 1489046 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-1455288/.minikube/files/etc/ssl/certs/14606522.pem --> /usr/share/ca-certificates/14606522.pem (1708 bytes)
	I1127 23:43:16.403893 1489046 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-1455288/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1127 23:43:16.431542 1489046 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-1455288/.minikube/certs/1460652.pem --> /usr/share/ca-certificates/1460652.pem (1338 bytes)
	I1127 23:43:16.459058 1489046 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1127 23:43:16.480009 1489046 ssh_runner.go:195] Run: openssl version
	I1127 23:43:16.486908 1489046 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1460652.pem && ln -fs /usr/share/ca-certificates/1460652.pem /etc/ssl/certs/1460652.pem"
	I1127 23:43:16.498007 1489046 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1460652.pem
	I1127 23:43:16.502628 1489046 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Nov 27 23:38 /usr/share/ca-certificates/1460652.pem
	I1127 23:43:16.502749 1489046 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1460652.pem
	I1127 23:43:16.511310 1489046 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1460652.pem /etc/ssl/certs/51391683.0"
	I1127 23:43:16.522788 1489046 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14606522.pem && ln -fs /usr/share/ca-certificates/14606522.pem /etc/ssl/certs/14606522.pem"
	I1127 23:43:16.534207 1489046 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14606522.pem
	I1127 23:43:16.538782 1489046 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Nov 27 23:38 /usr/share/ca-certificates/14606522.pem
	I1127 23:43:16.538889 1489046 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14606522.pem
	I1127 23:43:16.547303 1489046 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14606522.pem /etc/ssl/certs/3ec20f2e.0"
	I1127 23:43:16.558575 1489046 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1127 23:43:16.569846 1489046 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1127 23:43:16.574439 1489046 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Nov 27 23:31 /usr/share/ca-certificates/minikubeCA.pem
	I1127 23:43:16.574512 1489046 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1127 23:43:16.583251 1489046 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
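OpenSSL looks up trust anchors in /etc/ssl/certs by subject-name hash, so each installed PEM is symlinked as <hash>.0; the hashes in the commands above (51391683, 3ec20f2e, b5213941) are exactly what `openssl x509 -hash -noout` printed for each cert. A sketch of the same hash-and-link step from Go, shelling out to openssl as the log does (paths are illustrative, and writing /etc/ssl/certs needs root):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// linkBySubjectHash computes the OpenSSL subject hash for a PEM and
	// links it as /etc/ssl/certs/<hash>.0, mirroring the `openssl x509
	// -hash` plus `ln -fs` steps above.
	func linkBySubjectHash(pem string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
		if err != nil {
			return err
		}
		link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
		os.Remove(link) // recreate idempotently, like ln -fs
		return os.Symlink(pem, link)
	}

	func main() {
		if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}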
	I1127 23:43:16.594537 1489046 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1127 23:43:16.598924 1489046 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1127 23:43:16.599019 1489046 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-684553 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-684553 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1127 23:43:16.599123 1489046 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1127 23:43:16.599181 1489046 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1127 23:43:16.639141 1489046 cri.go:89] found id: ""
	I1127 23:43:16.639257 1489046 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1127 23:43:16.649930 1489046 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1127 23:43:16.660600 1489046 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I1127 23:43:16.660692 1489046 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1127 23:43:16.671299 1489046 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1127 23:43:16.671358 1489046 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1127 23:43:16.729339 1489046 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I1127 23:43:16.729560 1489046 kubeadm.go:322] [preflight] Running pre-flight checks
	I1127 23:43:16.781082 1489046 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I1127 23:43:16.781228 1489046 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1050-aws
	I1127 23:43:16.781283 1489046 kubeadm.go:322] OS: Linux
	I1127 23:43:16.781356 1489046 kubeadm.go:322] CGROUPS_CPU: enabled
	I1127 23:43:16.781433 1489046 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I1127 23:43:16.781507 1489046 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I1127 23:43:16.781583 1489046 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I1127 23:43:16.781661 1489046 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I1127 23:43:16.781740 1489046 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I1127 23:43:16.870835 1489046 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1127 23:43:16.870950 1489046 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1127 23:43:16.871079 1489046 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1127 23:43:17.105178 1489046 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1127 23:43:17.106859 1489046 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1127 23:43:17.106993 1489046 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1127 23:43:17.210317 1489046 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1127 23:43:17.212556 1489046 out.go:204]   - Generating certificates and keys ...
	I1127 23:43:17.212685 1489046 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1127 23:43:17.212773 1489046 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1127 23:43:17.585756 1489046 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1127 23:43:18.160441 1489046 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1127 23:43:18.388533 1489046 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1127 23:43:19.050003 1489046 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1127 23:43:19.788694 1489046 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1127 23:43:19.789038 1489046 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-684553 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1127 23:43:20.688067 1489046 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1127 23:43:20.688458 1489046 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-684553 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1127 23:43:21.169947 1489046 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1127 23:43:21.392900 1489046 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1127 23:43:21.662711 1489046 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1127 23:43:21.663042 1489046 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1127 23:43:21.982204 1489046 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1127 23:43:22.574289 1489046 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1127 23:43:22.808737 1489046 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1127 23:43:23.346644 1489046 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1127 23:43:23.347655 1489046 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1127 23:43:23.350424 1489046 out.go:204]   - Booting up control plane ...
	I1127 23:43:23.350537 1489046 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1127 23:43:23.358738 1489046 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1127 23:43:23.360356 1489046 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1127 23:43:23.361493 1489046 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1127 23:43:23.364476 1489046 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1127 23:43:36.366931 1489046 kubeadm.go:322] [apiclient] All control plane components are healthy after 13.002322 seconds
	I1127 23:43:36.367051 1489046 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1127 23:43:36.380682 1489046 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
	I1127 23:43:36.905031 1489046 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1127 23:43:36.905182 1489046 kubeadm.go:322] [mark-control-plane] Marking the node ingress-addon-legacy-684553 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I1127 23:43:37.413533 1489046 kubeadm.go:322] [bootstrap-token] Using token: fho9qx.flu8g0ebie5kzytj
	I1127 23:43:37.415594 1489046 out.go:204]   - Configuring RBAC rules ...
	I1127 23:43:37.415725 1489046 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1127 23:43:37.420542 1489046 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1127 23:43:37.429169 1489046 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1127 23:43:37.434229 1489046 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1127 23:43:37.436982 1489046 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1127 23:43:37.440539 1489046 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1127 23:43:37.449510 1489046 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1127 23:43:37.737739 1489046 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1127 23:43:37.864545 1489046 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1127 23:43:37.866020 1489046 kubeadm.go:322] 
	I1127 23:43:37.866091 1489046 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1127 23:43:37.866102 1489046 kubeadm.go:322] 
	I1127 23:43:37.866182 1489046 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1127 23:43:37.866192 1489046 kubeadm.go:322] 
	I1127 23:43:37.866237 1489046 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1127 23:43:37.866335 1489046 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1127 23:43:37.866392 1489046 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1127 23:43:37.866400 1489046 kubeadm.go:322] 
	I1127 23:43:37.866449 1489046 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1127 23:43:37.866521 1489046 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1127 23:43:37.866591 1489046 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1127 23:43:37.866599 1489046 kubeadm.go:322] 
	I1127 23:43:37.866678 1489046 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1127 23:43:37.866753 1489046 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1127 23:43:37.866761 1489046 kubeadm.go:322] 
	I1127 23:43:37.866839 1489046 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token fho9qx.flu8g0ebie5kzytj \
	I1127 23:43:37.866941 1489046 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:4bf7580b26a5e006cb8545a36d546acc708a5c0d8ea7cd28dd99f58e9fcb6509 \
	I1127 23:43:37.866968 1489046 kubeadm.go:322]     --control-plane 
	I1127 23:43:37.866973 1489046 kubeadm.go:322] 
	I1127 23:43:37.867052 1489046 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1127 23:43:37.867115 1489046 kubeadm.go:322] 
	I1127 23:43:37.867197 1489046 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token fho9qx.flu8g0ebie5kzytj \
	I1127 23:43:37.867321 1489046 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:4bf7580b26a5e006cb8545a36d546acc708a5c0d8ea7cd28dd99f58e9fcb6509 
	I1127 23:43:37.870464 1489046 kubeadm.go:322] W1127 23:43:16.728161    1238 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I1127 23:43:37.870678 1489046 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1050-aws\n", err: exit status 1
	I1127 23:43:37.870780 1489046 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1127 23:43:37.870901 1489046 kubeadm.go:322] W1127 23:43:23.358682    1238 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I1127 23:43:37.871020 1489046 kubeadm.go:322] W1127 23:43:23.360269    1238 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
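The kubeadm warnings above are expected with the docker driver: the kic container has no /lib/modules kernel config to parse, and minikube manages kubelet itself rather than via `systemctl enable`. The CGROUPS_* lines come from the kernel's controller table; a sketch of that check against the documented /proc/cgroups format (subsys_name, hierarchy, num_cgroups, enabled):

	package main

	import (
		"bufio"
		"fmt"
		"os"
		"strings"
	)

	// Prints the enabled cgroup controllers, the same data behind the
	// CGROUPS_* preflight lines above.
	func main() {
		f, err := os.Open("/proc/cgroups")
		if err != nil {
			panic(err)
		}
		defer f.Close()
		sc := bufio.NewScanner(f)
		for sc.Scan() {
			fields := strings.Fields(sc.Text())
			// Data rows have four columns; the header row fails the
			// enabled=="1" test and is skipped naturally.
			if len(fields) == 4 && fields[3] == "1" {
				fmt.Printf("CGROUPS_%s: enabled\n", strings.ToUpper(fields[0]))
			}
		}
	}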
	I1127 23:43:37.871039 1489046 cni.go:84] Creating CNI manager for ""
	I1127 23:43:37.871051 1489046 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1127 23:43:37.874451 1489046 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1127 23:43:37.876176 1489046 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1127 23:43:37.882838 1489046 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.18.20/kubectl ...
	I1127 23:43:37.882860 1489046 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1127 23:43:37.905283 1489046 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1127 23:43:38.339470 1489046 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1127 23:43:38.339610 1489046 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:43:38.339714 1489046 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=c8086df69b157f30f19d083fe45cc014f102df45 minikube.k8s.io/name=ingress-addon-legacy-684553 minikube.k8s.io/updated_at=2023_11_27T23_43_38_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:43:38.500314 1489046 ops.go:34] apiserver oom_adj: -16
	I1127 23:43:38.500404 1489046 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:43:38.599722 1489046 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:43:39.198198 1489046 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:43:39.697634 1489046 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:43:40.198456 1489046 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:43:40.697953 1489046 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:43:41.197695 1489046 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:43:41.698080 1489046 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:43:42.198730 1489046 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:43:42.697671 1489046 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:43:43.197696 1489046 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:43:43.698556 1489046 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:43:44.198472 1489046 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:43:44.698277 1489046 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:43:45.197699 1489046 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:43:45.698390 1489046 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:43:46.197983 1489046 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:43:46.698452 1489046 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:43:47.197743 1489046 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:43:47.698534 1489046 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:43:48.198404 1489046 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:43:48.697772 1489046 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:43:49.197805 1489046 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:43:49.698446 1489046 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:43:50.198615 1489046 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:43:50.697724 1489046 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:43:51.198008 1489046 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:43:51.698171 1489046 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:43:52.198481 1489046 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:43:52.697700 1489046 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:43:53.198302 1489046 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:43:53.697993 1489046 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:43:53.826826 1489046 kubeadm.go:1081] duration metric: took 15.487266864s to wait for elevateKubeSystemPrivileges.
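The burst of `kubectl get sa default` calls above is a roughly 500ms poll: kubeadm returns before the default ServiceAccount exists, and minikube waits for it before creating the minikube-rbac cluster-admin binding. A sketch of the same wait with client-go and apimachinery's wait helpers (clientset construction omitted):

	package kutil

	import (
		"context"
		"time"

		apierrors "k8s.io/apimachinery/pkg/api/errors"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)

	// waitForDefaultSA polls until the "default" ServiceAccount exists,
	// which is what the repeated `kubectl get sa default` calls are doing.
	func waitForDefaultSA(ctx context.Context, cs kubernetes.Interface) error {
		return wait.PollImmediate(500*time.Millisecond, 2*time.Minute, func() (bool, error) {
			_, err := cs.CoreV1().ServiceAccounts("default").Get(ctx, "default", metav1.GetOptions{})
			if apierrors.IsNotFound(err) {
				return false, nil // not created yet; keep polling
			}
			return err == nil, err
		})
	}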
	I1127 23:43:53.826863 1489046 kubeadm.go:406] StartCluster complete in 37.227841012s
	I1127 23:43:53.826880 1489046 settings.go:142] acquiring lock: {Name:mk2effde19f5a08dd61e438cec70b0751f0f2f09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1127 23:43:53.826938 1489046 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17206-1455288/kubeconfig
	I1127 23:43:53.827691 1489046 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17206-1455288/kubeconfig: {Name:mk024e2b9ecd216772e0b17d0d1d16e859027716 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1127 23:43:53.828401 1489046 kapi.go:59] client config for ingress-addon-legacy-684553: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/ingress-addon-legacy-684553/client.crt", KeyFile:"/home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/ingress-addon-legacy-684553/client.key", CAFile:"/home/jenkins/minikube-integration/17206-1455288/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16c6420), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1127 23:43:53.829513 1489046 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1127 23:43:53.829619 1489046 cert_rotation.go:137] Starting client certificate rotation controller
	I1127 23:43:53.829596 1489046 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1127 23:43:53.829657 1489046 addons.go:69] Setting storage-provisioner=true in profile "ingress-addon-legacy-684553"
	I1127 23:43:53.829672 1489046 addons.go:231] Setting addon storage-provisioner=true in "ingress-addon-legacy-684553"
	I1127 23:43:53.829731 1489046 host.go:66] Checking if "ingress-addon-legacy-684553" exists ...
	I1127 23:43:53.829845 1489046 config.go:182] Loaded profile config "ingress-addon-legacy-684553": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I1127 23:43:53.829910 1489046 addons.go:69] Setting default-storageclass=true in profile "ingress-addon-legacy-684553"
	I1127 23:43:53.829925 1489046 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ingress-addon-legacy-684553"
	I1127 23:43:53.830185 1489046 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-684553 --format={{.State.Status}}
	I1127 23:43:53.830222 1489046 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-684553 --format={{.State.Status}}
	I1127 23:43:53.890196 1489046 kapi.go:59] client config for ingress-addon-legacy-684553: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/ingress-addon-legacy-684553/client.crt", KeyFile:"/home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/ingress-addon-legacy-684553/client.key", CAFile:"/home/jenkins/minikube-integration/17206-1455288/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16c6420), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1127 23:43:53.892648 1489046 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1127 23:43:53.891358 1489046 addons.go:231] Setting addon default-storageclass=true in "ingress-addon-legacy-684553"
	I1127 23:43:53.894403 1489046 host.go:66] Checking if "ingress-addon-legacy-684553" exists ...
	I1127 23:43:53.894878 1489046 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-684553 --format={{.State.Status}}
	I1127 23:43:53.895143 1489046 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1127 23:43:53.895157 1489046 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1127 23:43:53.895202 1489046 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-684553
	I1127 23:43:53.934162 1489046 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34084 SSHKeyPath:/home/jenkins/minikube-integration/17206-1455288/.minikube/machines/ingress-addon-legacy-684553/id_rsa Username:docker}
	I1127 23:43:53.939567 1489046 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1127 23:43:53.939599 1489046 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1127 23:43:53.939661 1489046 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-684553
	I1127 23:43:53.965579 1489046 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34084 SSHKeyPath:/home/jenkins/minikube-integration/17206-1455288/.minikube/machines/ingress-addon-legacy-684553/id_rsa Username:docker}
	I1127 23:43:53.983995 1489046 kapi.go:248] "coredns" deployment in "kube-system" namespace and "ingress-addon-legacy-684553" context rescaled to 1 replicas
	I1127 23:43:53.984048 1489046 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1127 23:43:53.985736 1489046 out.go:177] * Verifying Kubernetes components...
	I1127 23:43:53.987698 1489046 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1127 23:43:54.041031 1489046 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1127 23:43:54.041821 1489046 kapi.go:59] client config for ingress-addon-legacy-684553: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/ingress-addon-legacy-684553/client.crt", KeyFile:"/home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/ingress-addon-legacy-684553/client.key", CAFile:"/home/jenkins/minikube-integration/17206-1455288/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16c6420), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1127 23:43:54.042271 1489046 node_ready.go:35] waiting up to 6m0s for node "ingress-addon-legacy-684553" to be "Ready" ...
	I1127 23:43:54.144199 1489046 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1127 23:43:54.183863 1489046 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1127 23:43:54.515366 1489046 start.go:926] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
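The sed pipeline above rewrites the Corefile in the coredns ConfigMap, inserting a hosts block so host.minikube.internal resolves to the host gateway (192.168.49.1). The same edit done through client-go instead of sed, as a sketch; anchoring on the forward plugin line is an assumption about the Corefile layout:

	package kutil

	import (
		"context"
		"fmt"
		"strings"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// injectHostRecord adds a hosts{} block to the coredns Corefile so that
	// host.minikube.internal resolves to the gateway IP.
	func injectHostRecord(ctx context.Context, cs kubernetes.Interface, gatewayIP string) error {
		cm, err := cs.CoreV1().ConfigMaps("kube-system").Get(ctx, "coredns", metav1.GetOptions{})
		if err != nil {
			return err
		}
		hosts := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n", gatewayIP)
		cm.Data["Corefile"] = strings.Replace(cm.Data["Corefile"], "        forward .", hosts+"        forward .", 1)
		_, err = cs.CoreV1().ConfigMaps("kube-system").Update(ctx, cm, metav1.UpdateOptions{})
		return err
	}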
	I1127 23:43:54.693803 1489046 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I1127 23:43:54.703135 1489046 addons.go:502] enable addons completed in 873.527167ms: enabled=[default-storageclass storage-provisioner]
	I1127 23:43:56.057574 1489046 node_ready.go:58] node "ingress-addon-legacy-684553" has status "Ready":"False"
	I1127 23:43:58.058515 1489046 node_ready.go:58] node "ingress-addon-legacy-684553" has status "Ready":"False"
	I1127 23:44:00.062565 1489046 node_ready.go:58] node "ingress-addon-legacy-684553" has status "Ready":"False"
	I1127 23:44:01.558681 1489046 node_ready.go:49] node "ingress-addon-legacy-684553" has status "Ready":"True"
	I1127 23:44:01.558709 1489046 node_ready.go:38] duration metric: took 7.516395004s waiting for node "ingress-addon-legacy-684553" to be "Ready" ...
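node_ready.go is polling the node object until its NodeReady condition reports True, which happens here about 7.5s in, once kindnet has wired up pod networking. The predicate it waits on, sketched with the client-go types:

	package kutil

	import corev1 "k8s.io/api/core/v1"

	// nodeIsReady reports whether the NodeReady condition is True, the
	// check behind the node_ready.go polling above.
	func nodeIsReady(n *corev1.Node) bool {
		for _, c := range n.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}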
	I1127 23:44:01.558719 1489046 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1127 23:44:01.566007 1489046 pod_ready.go:78] waiting up to 6m0s for pod "coredns-66bff467f8-rbcrv" in "kube-system" namespace to be "Ready" ...
	I1127 23:44:03.588528 1489046 pod_ready.go:102] pod "coredns-66bff467f8-rbcrv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-11-27 23:43:54 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: HostIPs:[] PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I1127 23:44:06.089368 1489046 pod_ready.go:102] pod "coredns-66bff467f8-rbcrv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-11-27 23:43:54 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: HostIPs:[] PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I1127 23:44:08.091942 1489046 pod_ready.go:102] pod "coredns-66bff467f8-rbcrv" in "kube-system" namespace has status "Ready":"False"
	I1127 23:44:09.091378 1489046 pod_ready.go:92] pod "coredns-66bff467f8-rbcrv" in "kube-system" namespace has status "Ready":"True"
	I1127 23:44:09.091406 1489046 pod_ready.go:81] duration metric: took 7.525370245s waiting for pod "coredns-66bff467f8-rbcrv" in "kube-system" namespace to be "Ready" ...
	I1127 23:44:09.091418 1489046 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ingress-addon-legacy-684553" in "kube-system" namespace to be "Ready" ...
	I1127 23:44:09.096362 1489046 pod_ready.go:92] pod "etcd-ingress-addon-legacy-684553" in "kube-system" namespace has status "Ready":"True"
	I1127 23:44:09.096393 1489046 pod_ready.go:81] duration metric: took 4.961821ms waiting for pod "etcd-ingress-addon-legacy-684553" in "kube-system" namespace to be "Ready" ...
	I1127 23:44:09.096428 1489046 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ingress-addon-legacy-684553" in "kube-system" namespace to be "Ready" ...
	I1127 23:44:09.101968 1489046 pod_ready.go:92] pod "kube-apiserver-ingress-addon-legacy-684553" in "kube-system" namespace has status "Ready":"True"
	I1127 23:44:09.101997 1489046 pod_ready.go:81] duration metric: took 5.55466ms waiting for pod "kube-apiserver-ingress-addon-legacy-684553" in "kube-system" namespace to be "Ready" ...
	I1127 23:44:09.102011 1489046 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ingress-addon-legacy-684553" in "kube-system" namespace to be "Ready" ...
	I1127 23:44:09.107087 1489046 pod_ready.go:92] pod "kube-controller-manager-ingress-addon-legacy-684553" in "kube-system" namespace has status "Ready":"True"
	I1127 23:44:09.107110 1489046 pod_ready.go:81] duration metric: took 5.090697ms waiting for pod "kube-controller-manager-ingress-addon-legacy-684553" in "kube-system" namespace to be "Ready" ...
	I1127 23:44:09.107125 1489046 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-h29qg" in "kube-system" namespace to be "Ready" ...
	I1127 23:44:09.112085 1489046 pod_ready.go:92] pod "kube-proxy-h29qg" in "kube-system" namespace has status "Ready":"True"
	I1127 23:44:09.112112 1489046 pod_ready.go:81] duration metric: took 4.979962ms waiting for pod "kube-proxy-h29qg" in "kube-system" namespace to be "Ready" ...
	I1127 23:44:09.112125 1489046 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ingress-addon-legacy-684553" in "kube-system" namespace to be "Ready" ...
	I1127 23:44:09.286491 1489046 request.go:629] Waited for 174.287393ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ingress-addon-legacy-684553
	I1127 23:44:09.486598 1489046 request.go:629] Waited for 197.300199ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ingress-addon-legacy-684553
	I1127 23:44:09.489333 1489046 pod_ready.go:92] pod "kube-scheduler-ingress-addon-legacy-684553" in "kube-system" namespace has status "Ready":"True"
	I1127 23:44:09.489358 1489046 pod_ready.go:81] duration metric: took 377.224902ms waiting for pod "kube-scheduler-ingress-addon-legacy-684553" in "kube-system" namespace to be "Ready" ...
	I1127 23:44:09.489372 1489046 pod_ready.go:38] duration metric: took 7.930635651s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
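The request.go "Waited for ... due to client-side throttling" messages above and in the calls that follow come from client-go's token-bucket rate limiter (rest.Config defaults to QPS 5, burst 10), not from server-side priority and fairness. Raising the limits on the rest.Config is the usual knob, sketched here (kubeconfigPath is illustrative):

	package kutil

	import (
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// newFasterClient raises client-go's client-side rate limit; the
	// defaults (QPS 5, burst 10) are what produce the throttling waits.
	func newFasterClient(kubeconfigPath string) (*kubernetes.Clientset, error) {
		cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfigPath)
		if err != nil {
			return nil, err
		}
		cfg.QPS = 50
		cfg.Burst = 100
		return kubernetes.NewForConfig(cfg)
	}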
	I1127 23:44:09.489386 1489046 api_server.go:52] waiting for apiserver process to appear ...
	I1127 23:44:09.489461 1489046 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1127 23:44:09.502397 1489046 api_server.go:72] duration metric: took 15.518314873s to wait for apiserver process to appear ...
	I1127 23:44:09.502422 1489046 api_server.go:88] waiting for apiserver healthz status ...
	I1127 23:44:09.502438 1489046 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1127 23:44:09.511304 1489046 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1127 23:44:09.512206 1489046 api_server.go:141] control plane version: v1.18.20
	I1127 23:44:09.512231 1489046 api_server.go:131] duration metric: took 9.802592ms to wait for apiserver health ...
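The healthz probe is a plain HTTPS GET against https://192.168.49.2:8443/healthz that trusts the cluster CA; kubeadm's default system:public-info-viewer binding lets it succeed without client credentials, and a 200 with body "ok" counts as healthy. A standalone sketch, using the CA path seen earlier in the log:

	package main

	import (
		"crypto/tls"
		"crypto/x509"
		"fmt"
		"io"
		"net/http"
		"os"
	)

	func main() {
		// Trust the cluster CA, then probe /healthz like api_server.go does.
		caPEM, err := os.ReadFile("/home/jenkins/minikube-integration/17206-1455288/.minikube/ca.crt")
		if err != nil {
			panic(err)
		}
		pool := x509.NewCertPool()
		pool.AppendCertsFromPEM(caPEM)
		client := &http.Client{Transport: &http.Transport{
			TLSClientConfig: &tls.Config{RootCAs: pool},
		}}
		resp, err := client.Get("https://192.168.49.2:8443/healthz")
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Println(resp.StatusCode, string(body)) // expect: 200 ok
	}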
	I1127 23:44:09.512242 1489046 system_pods.go:43] waiting for kube-system pods to appear ...
	I1127 23:44:09.686618 1489046 request.go:629] Waited for 174.293416ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I1127 23:44:09.694882 1489046 system_pods.go:59] 8 kube-system pods found
	I1127 23:44:09.694968 1489046 system_pods.go:61] "coredns-66bff467f8-rbcrv" [7c48f479-5740-4b2c-9512-3d04b1265ed7] Running
	I1127 23:44:09.694987 1489046 system_pods.go:61] "etcd-ingress-addon-legacy-684553" [c0a36600-1950-4938-a5cb-456d86cc712e] Running
	I1127 23:44:09.695006 1489046 system_pods.go:61] "kindnet-68g2m" [fda7c862-56be-4d7c-82ba-bf256d4c6c90] Running
	I1127 23:44:09.695036 1489046 system_pods.go:61] "kube-apiserver-ingress-addon-legacy-684553" [c944250f-dbf1-4f8e-9f28-27b1c60d9577] Running
	I1127 23:44:09.695060 1489046 system_pods.go:61] "kube-controller-manager-ingress-addon-legacy-684553" [fa1b4e82-2ddf-4c37-8d42-18c633ee086b] Running
	I1127 23:44:09.695078 1489046 system_pods.go:61] "kube-proxy-h29qg" [5a0c469f-4218-4b81-91cf-6681d0bd6f89] Running
	I1127 23:44:09.695096 1489046 system_pods.go:61] "kube-scheduler-ingress-addon-legacy-684553" [be40e7b6-bd68-4dc0-8e9f-f2cbb84e3b7b] Running
	I1127 23:44:09.695113 1489046 system_pods.go:61] "storage-provisioner" [e4a72965-95bb-4a32-9f04-35ff19526292] Running
	I1127 23:44:09.695138 1489046 system_pods.go:74] duration metric: took 182.889472ms to wait for pod list to return data ...
	I1127 23:44:09.695165 1489046 default_sa.go:34] waiting for default service account to be created ...
	I1127 23:44:09.886572 1489046 request.go:629] Waited for 191.308858ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/default/serviceaccounts
	I1127 23:44:09.889012 1489046 default_sa.go:45] found service account: "default"
	I1127 23:44:09.889040 1489046 default_sa.go:55] duration metric: took 193.858403ms for default service account to be created ...
	I1127 23:44:09.889054 1489046 system_pods.go:116] waiting for k8s-apps to be running ...
	I1127 23:44:10.086474 1489046 request.go:629] Waited for 197.338188ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I1127 23:44:10.093123 1489046 system_pods.go:86] 8 kube-system pods found
	I1127 23:44:10.093160 1489046 system_pods.go:89] "coredns-66bff467f8-rbcrv" [7c48f479-5740-4b2c-9512-3d04b1265ed7] Running
	I1127 23:44:10.093167 1489046 system_pods.go:89] "etcd-ingress-addon-legacy-684553" [c0a36600-1950-4938-a5cb-456d86cc712e] Running
	I1127 23:44:10.093172 1489046 system_pods.go:89] "kindnet-68g2m" [fda7c862-56be-4d7c-82ba-bf256d4c6c90] Running
	I1127 23:44:10.093178 1489046 system_pods.go:89] "kube-apiserver-ingress-addon-legacy-684553" [c944250f-dbf1-4f8e-9f28-27b1c60d9577] Running
	I1127 23:44:10.093183 1489046 system_pods.go:89] "kube-controller-manager-ingress-addon-legacy-684553" [fa1b4e82-2ddf-4c37-8d42-18c633ee086b] Running
	I1127 23:44:10.093188 1489046 system_pods.go:89] "kube-proxy-h29qg" [5a0c469f-4218-4b81-91cf-6681d0bd6f89] Running
	I1127 23:44:10.093230 1489046 system_pods.go:89] "kube-scheduler-ingress-addon-legacy-684553" [be40e7b6-bd68-4dc0-8e9f-f2cbb84e3b7b] Running
	I1127 23:44:10.093243 1489046 system_pods.go:89] "storage-provisioner" [e4a72965-95bb-4a32-9f04-35ff19526292] Running
	I1127 23:44:10.093250 1489046 system_pods.go:126] duration metric: took 204.190648ms to wait for k8s-apps to be running ...
	I1127 23:44:10.093259 1489046 system_svc.go:44] waiting for kubelet service to be running ....
	I1127 23:44:10.093332 1489046 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1127 23:44:10.109068 1489046 system_svc.go:56] duration metric: took 15.796459ms WaitForService to wait for kubelet.
	I1127 23:44:10.109098 1489046 kubeadm.go:581] duration metric: took 16.125022605s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1127 23:44:10.109119 1489046 node_conditions.go:102] verifying NodePressure condition ...
	I1127 23:44:10.286454 1489046 request.go:629] Waited for 177.24395ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes
	I1127 23:44:10.289292 1489046 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1127 23:44:10.289323 1489046 node_conditions.go:123] node cpu capacity is 2
	I1127 23:44:10.289335 1489046 node_conditions.go:105] duration metric: took 180.210977ms to run NodePressure ...
	I1127 23:44:10.289365 1489046 start.go:228] waiting for startup goroutines ...
	I1127 23:44:10.289376 1489046 start.go:233] waiting for cluster config update ...
	I1127 23:44:10.289386 1489046 start.go:242] writing updated cluster config ...
	I1127 23:44:10.289672 1489046 ssh_runner.go:195] Run: rm -f paused
	I1127 23:44:10.347480 1489046 start.go:600] kubectl: 1.28.4, cluster: 1.18.20 (minor skew: 10)
	I1127 23:44:10.349531 1489046 out.go:177] 
	W1127 23:44:10.351256 1489046 out.go:239] ! /usr/local/bin/kubectl is version 1.28.4, which may have incompatibilities with Kubernetes 1.18.20.
	I1127 23:44:10.352886 1489046 out.go:177]   - Want kubectl v1.18.20? Try 'minikube kubectl -- get pods -A'
	I1127 23:44:10.354615 1489046 out.go:177] * Done! kubectl is now configured to use "ingress-addon-legacy-684553" cluster and "default" namespace by default
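
The start log above ends with minikube confirming the kube-apiserver process via pgrep, then polling the apiserver's /healthz endpoint until it returns HTTP 200 with the literal body "ok". A minimal Go sketch of that style of readiness probe, using only the standard library; the URL, the poll interval, and the TLS-skip shortcut are illustrative assumptions, not minikube's actual client code:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    // waitForHealthz polls url until it returns HTTP 200 with body "ok",
    // mirroring the "waiting for apiserver healthz status" step above.
    func waitForHealthz(url string, timeout time.Duration) error {
    	// Skipping TLS verification is an illustration-only shortcut; a real
    	// client would trust the cluster CA instead.
    	client := &http.Client{
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    		Timeout:   2 * time.Second,
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
    				return nil
    			}
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("healthz at %s not ok within %s", url, timeout)
    }

    func main() {
    	if err := waitForHealthz("https://192.168.49.2:8443/healthz", 30*time.Second); err != nil {
    		fmt.Println(err)
    	}
    }

Because the process check runs first, a failure of the HTTP probe points at apiserver configuration rather than a missing process.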
	
	* 
	* ==> CRI-O <==
	* Nov 27 23:47:15 ingress-addon-legacy-684553 crio[904]: time="2023-11-27 23:47:15.217265675Z" level=info msg="Stopping container: 7f8c53a5491c085bf1d8df325155ec6cfaeba4c838a4fd65301b35bd903907b6 (timeout: 2s)" id=be3fcaae-d7bd-476e-8ebc-d96f495b93fd name=/runtime.v1alpha2.RuntimeService/StopContainer
	Nov 27 23:47:15 ingress-addon-legacy-684553 crio[904]: time="2023-11-27 23:47:15.223606051Z" level=info msg="Stopping container: 7f8c53a5491c085bf1d8df325155ec6cfaeba4c838a4fd65301b35bd903907b6 (timeout: 2s)" id=bcb422cf-7169-4fdf-8d3f-996c9179ce6c name=/runtime.v1alpha2.RuntimeService/StopContainer
	Nov 27 23:47:15 ingress-addon-legacy-684553 crio[904]: time="2023-11-27 23:47:15.244130156Z" level=info msg="Removing container: fce98dda2ddd20b643bdc8aa3adc13942805138d78bbde3946f87ded2eb70c06" id=d23c92da-be29-4a59-81a4-a8aad24c6f22 name=/runtime.v1alpha2.RuntimeService/RemoveContainer
	Nov 27 23:47:15 ingress-addon-legacy-684553 crio[904]: time="2023-11-27 23:47:15.254734162Z" level=info msg="Stopping pod sandbox: c7e3e9165032db428c2ece96756ca89002049444f272428ce1af36dbc26a79b1" id=f5ab77cb-5259-467c-8b0e-d03f3566e1ad name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Nov 27 23:47:15 ingress-addon-legacy-684553 crio[904]: time="2023-11-27 23:47:15.254781308Z" level=info msg="Stopped pod sandbox (already stopped): c7e3e9165032db428c2ece96756ca89002049444f272428ce1af36dbc26a79b1" id=f5ab77cb-5259-467c-8b0e-d03f3566e1ad name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Nov 27 23:47:15 ingress-addon-legacy-684553 crio[904]: time="2023-11-27 23:47:15.269191455Z" level=info msg="Removed container fce98dda2ddd20b643bdc8aa3adc13942805138d78bbde3946f87ded2eb70c06: default/hello-world-app-5f5d8b66bb-zwrwz/hello-world-app" id=d23c92da-be29-4a59-81a4-a8aad24c6f22 name=/runtime.v1alpha2.RuntimeService/RemoveContainer
	Nov 27 23:47:17 ingress-addon-legacy-684553 crio[904]: time="2023-11-27 23:47:17.237593874Z" level=warning msg="Stopping container 7f8c53a5491c085bf1d8df325155ec6cfaeba4c838a4fd65301b35bd903907b6 with stop signal timed out: timeout reached after 2 seconds waiting for container process to exit" id=be3fcaae-d7bd-476e-8ebc-d96f495b93fd name=/runtime.v1alpha2.RuntimeService/StopContainer
	Nov 27 23:47:17 ingress-addon-legacy-684553 conmon[2734]: conmon 7f8c53a5491c085bf1d8 <ninfo>: container 2745 exited with status 137
	Nov 27 23:47:17 ingress-addon-legacy-684553 crio[904]: time="2023-11-27 23:47:17.418257641Z" level=info msg="Stopped container 7f8c53a5491c085bf1d8df325155ec6cfaeba4c838a4fd65301b35bd903907b6: ingress-nginx/ingress-nginx-controller-7fcf777cb7-f6htr/controller" id=bcb422cf-7169-4fdf-8d3f-996c9179ce6c name=/runtime.v1alpha2.RuntimeService/StopContainer
	Nov 27 23:47:17 ingress-addon-legacy-684553 crio[904]: time="2023-11-27 23:47:17.418901515Z" level=info msg="Stopping pod sandbox: a18c4d2ac6f90ac62c48d1bff265dc8ecccf67664b7b4d2518c79610ff5214c4" id=5ec2db9f-28ed-4b34-b3ab-da8ef0f5c940 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Nov 27 23:47:17 ingress-addon-legacy-684553 crio[904]: time="2023-11-27 23:47:17.420345891Z" level=info msg="Stopped container 7f8c53a5491c085bf1d8df325155ec6cfaeba4c838a4fd65301b35bd903907b6: ingress-nginx/ingress-nginx-controller-7fcf777cb7-f6htr/controller" id=be3fcaae-d7bd-476e-8ebc-d96f495b93fd name=/runtime.v1alpha2.RuntimeService/StopContainer
	Nov 27 23:47:17 ingress-addon-legacy-684553 crio[904]: time="2023-11-27 23:47:17.420942701Z" level=info msg="Stopping pod sandbox: a18c4d2ac6f90ac62c48d1bff265dc8ecccf67664b7b4d2518c79610ff5214c4" id=6c3cae5f-c38d-4ad8-a34c-bb08f3196f74 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Nov 27 23:47:17 ingress-addon-legacy-684553 crio[904]: time="2023-11-27 23:47:17.422350113Z" level=info msg="Restoring iptables rules: *nat\n:KUBE-HOSTPORTS - [0:0]\n:KUBE-HP-DXKXOVBGR544SBU4 - [0:0]\n:KUBE-HP-AFXBYAH6IIWKRFB7 - [0:0]\n-X KUBE-HP-AFXBYAH6IIWKRFB7\n-X KUBE-HP-DXKXOVBGR544SBU4\nCOMMIT\n"
	Nov 27 23:47:17 ingress-addon-legacy-684553 crio[904]: time="2023-11-27 23:47:17.423944001Z" level=info msg="Closing host port tcp:80"
	Nov 27 23:47:17 ingress-addon-legacy-684553 crio[904]: time="2023-11-27 23:47:17.423991976Z" level=info msg="Closing host port tcp:443"
	Nov 27 23:47:17 ingress-addon-legacy-684553 crio[904]: time="2023-11-27 23:47:17.425207627Z" level=info msg="Host port tcp:80 does not have an open socket"
	Nov 27 23:47:17 ingress-addon-legacy-684553 crio[904]: time="2023-11-27 23:47:17.425234318Z" level=info msg="Host port tcp:443 does not have an open socket"
	Nov 27 23:47:17 ingress-addon-legacy-684553 crio[904]: time="2023-11-27 23:47:17.425384052Z" level=info msg="Got pod network &{Name:ingress-nginx-controller-7fcf777cb7-f6htr Namespace:ingress-nginx ID:a18c4d2ac6f90ac62c48d1bff265dc8ecccf67664b7b4d2518c79610ff5214c4 UID:d136fa23-4e4c-4427-ac14-054177890409 NetNS:/var/run/netns/c0b509d9-0f48-449c-b7f6-9fee501d84d1 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Nov 27 23:47:17 ingress-addon-legacy-684553 crio[904]: time="2023-11-27 23:47:17.425530840Z" level=info msg="Deleting pod ingress-nginx_ingress-nginx-controller-7fcf777cb7-f6htr from CNI network \"kindnet\" (type=ptp)"
	Nov 27 23:47:17 ingress-addon-legacy-684553 crio[904]: time="2023-11-27 23:47:17.441753318Z" level=info msg="Stopped pod sandbox: a18c4d2ac6f90ac62c48d1bff265dc8ecccf67664b7b4d2518c79610ff5214c4" id=5ec2db9f-28ed-4b34-b3ab-da8ef0f5c940 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Nov 27 23:47:17 ingress-addon-legacy-684553 crio[904]: time="2023-11-27 23:47:17.441892697Z" level=info msg="Stopped pod sandbox (already stopped): a18c4d2ac6f90ac62c48d1bff265dc8ecccf67664b7b4d2518c79610ff5214c4" id=6c3cae5f-c38d-4ad8-a34c-bb08f3196f74 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Nov 27 23:47:19 ingress-addon-legacy-684553 crio[904]: time="2023-11-27 23:47:19.255539735Z" level=info msg="Stopping container: 7f8c53a5491c085bf1d8df325155ec6cfaeba4c838a4fd65301b35bd903907b6 (timeout: 2s)" id=3ed7d4b1-76fe-4f2c-aafc-4db66ff92ff4 name=/runtime.v1alpha2.RuntimeService/StopContainer
	Nov 27 23:47:19 ingress-addon-legacy-684553 crio[904]: time="2023-11-27 23:47:19.258373397Z" level=info msg="Stopped container 7f8c53a5491c085bf1d8df325155ec6cfaeba4c838a4fd65301b35bd903907b6: ingress-nginx/ingress-nginx-controller-7fcf777cb7-f6htr/controller" id=3ed7d4b1-76fe-4f2c-aafc-4db66ff92ff4 name=/runtime.v1alpha2.RuntimeService/StopContainer
	Nov 27 23:47:19 ingress-addon-legacy-684553 crio[904]: time="2023-11-27 23:47:19.258880928Z" level=info msg="Stopping pod sandbox: a18c4d2ac6f90ac62c48d1bff265dc8ecccf67664b7b4d2518c79610ff5214c4" id=d1234249-3e33-4c1f-927b-e7374a432e4d name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Nov 27 23:47:19 ingress-addon-legacy-684553 crio[904]: time="2023-11-27 23:47:19.258914651Z" level=info msg="Stopped pod sandbox (already stopped): a18c4d2ac6f90ac62c48d1bff265dc8ecccf67664b7b4d2518c79610ff5214c4" id=d1234249-3e33-4c1f-927b-e7374a432e4d name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
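
The CRI-O entries above show the full stop escalation: a stop request with a 2s timeout, a warning that the stop signal timed out, and conmon reporting exit status 137, which is 128 + 9 (SIGKILL). A sketch of the same SIGTERM-then-SIGKILL pattern; the shell child that ignores SIGTERM stands in for the container process, and this is the general pattern, not CRI-O's code:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"syscall"
    	"time"
    )

    // stopWithGrace sends SIGTERM, waits up to grace, then escalates to
    // SIGKILL -- the same escalation CRI-O applies above.
    func stopWithGrace(cmd *exec.Cmd, grace time.Duration) int {
    	_ = cmd.Process.Signal(syscall.SIGTERM)
    	done := make(chan error, 1)
    	go func() { done <- cmd.Wait() }()
    	select {
    	case <-done:
    	case <-time.After(grace):
    		_ = cmd.Process.Kill() // SIGKILL once the grace period expires
    		<-done
    	}
    	ws := cmd.ProcessState.Sys().(syscall.WaitStatus)
    	if ws.Signaled() {
    		return 128 + int(ws.Signal()) // 128 + 9 = 137 for SIGKILL
    	}
    	return ws.ExitStatus()
    }

    func main() {
    	// The trailing ":" keeps the shell from exec'ing sleep, so the ignored
    	// SIGTERM stays with the process we signal.
    	cmd := exec.Command("sh", "-c", `trap "" TERM; sleep 60; :`)
    	if err := cmd.Start(); err != nil {
    		panic(err)
    	}
    	fmt.Println("exit code:", stopWithGrace(cmd, 2*time.Second)) // prints 137
    }

The repeated "Stopped pod sandbox (already stopped)" lines are the idempotent retries of the same teardown, not additional failures.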
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	2becc930340a7       dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79                                                   8 seconds ago       Exited              hello-world-app           2                   2958d9d6791dd       hello-world-app-5f5d8b66bb-zwrwz
	49fbb4501b90f       docker.io/library/nginx@sha256:b7537eea6ffa4f00aac311f16654b50736328eb370208c68b6649a97b7a2724b                    2 minutes ago       Running             nginx                     0                   e05a695a051f2       nginx
	7f8c53a5491c0       registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324   3 minutes ago       Exited              controller                0                   a18c4d2ac6f90       ingress-nginx-controller-7fcf777cb7-f6htr
	e771c3118ea63       docker.io/jettech/kube-webhook-certgen@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7     3 minutes ago       Exited              patch                     0                   8c95300d4b2a0       ingress-nginx-admission-patch-fb278
	b02d7e9333746       docker.io/jettech/kube-webhook-certgen@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7     3 minutes ago       Exited              create                    0                   bcdde8726d7e5       ingress-nginx-admission-create-8src7
	f273339ed07c2       6e17ba78cf3ebe1410fe828dc4ca57d3df37ad0b3c1a64161e5c27d57a24d184                                                   3 minutes ago       Running             coredns                   0                   3bef90b5ba56e       coredns-66bff467f8-rbcrv
	3402b457aff01       gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2    3 minutes ago       Running             storage-provisioner       0                   1de679615d9e4       storage-provisioner
	b8390a345c610       docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052                 3 minutes ago       Running             kindnet-cni               0                   1e091358254e0       kindnet-68g2m
	9435ffa042786       565297bc6f7d41fdb7a8ac7f9d75617ef4e6efdd1b1e41af6e060e19c44c28a8                                                   3 minutes ago       Running             kube-proxy                0                   765fa6c593d04       kube-proxy-h29qg
	22df5594ef923       ab707b0a0ea339254cc6e3f2e7d618d4793d5129acb2288e9194769271404952                                                   3 minutes ago       Running             etcd                      0                   5ab7383464897       etcd-ingress-addon-legacy-684553
	6c68aad412239       095f37015706de6eedb4f57eb2f9a25a1e3bf4bec63d50ba73f8968ef4094fd1                                                   3 minutes ago       Running             kube-scheduler            0                   ead33fe8332e5       kube-scheduler-ingress-addon-legacy-684553
	1acf25e8249c5       2694cf044d66591c37b12c60ce1f1cdba3d271af5ebda43a2e4d32ebbadd97d0                                                   3 minutes ago       Running             kube-apiserver            0                   64685f928cebf       kube-apiserver-ingress-addon-legacy-684553
	1e5be3fa80045       68a4fac29a865f21217550dbd3570dc1adbc602cf05d6eeb6f060eec1359e1f1                                                   3 minutes ago       Running             kube-controller-manager   0                   19afbfbb58140       kube-controller-manager-ingress-addon-legacy-684553
	

	* 
	* ==> coredns [f273339ed07c2b586b5bfdcabb4a67a596deae0cba9b99512e6f9bcbc2eb3297] <==
	* [INFO] 10.244.0.5:33534 - 8539 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000062408s
	[INFO] 10.244.0.5:33534 - 12790 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002105514s
	[INFO] 10.244.0.5:37315 - 2456 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001500383s
	[INFO] 10.244.0.5:33534 - 41974 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.00137844s
	[INFO] 10.244.0.5:33534 - 22774 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000138017s
	[INFO] 10.244.0.5:37315 - 47434 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.000887876s
	[INFO] 10.244.0.5:37315 - 57756 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000039121s
	[INFO] 10.244.0.5:45876 - 14690 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000071647s
	[INFO] 10.244.0.5:41905 - 3620 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000045366s
	[INFO] 10.244.0.5:45876 - 18996 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.00011053s
	[INFO] 10.244.0.5:45876 - 41831 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000035586s
	[INFO] 10.244.0.5:45876 - 24423 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000031712s
	[INFO] 10.244.0.5:45876 - 32782 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000034797s
	[INFO] 10.244.0.5:45876 - 38805 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000055121s
	[INFO] 10.244.0.5:41905 - 13761 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000116799s
	[INFO] 10.244.0.5:41905 - 16891 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.0000517s
	[INFO] 10.244.0.5:41905 - 62837 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000040434s
	[INFO] 10.244.0.5:45876 - 7932 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001596193s
	[INFO] 10.244.0.5:41905 - 2388 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000084175s
	[INFO] 10.244.0.5:41905 - 7368 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000040319s
	[INFO] 10.244.0.5:45876 - 24035 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.000993845s
	[INFO] 10.244.0.5:45876 - 3146 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000151457s
	[INFO] 10.244.0.5:41905 - 6354 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001103899s
	[INFO] 10.244.0.5:41905 - 46248 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001644406s
	[INFO] 10.244.0.5:41905 - 4175 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000078629s
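
The query pattern above is resolv.conf search-domain expansion: the looked-up name hello-world-app.default.svc.cluster.local has four dots, below the pod default of ndots:5, so each search suffix is tried (and answered NXDOMAIN) before the name as given finally returns NOERROR. Judging by the first suffix, the client at 10.244.0.5 sits in the ingress-nginx namespace; the search list in the sketch below is an assumption reconstructed from those suffixes:

    package main

    import (
    	"fmt"
    	"strings"
    )

    // candidates reproduces the resolver's search behaviour: when a name has
    // fewer dots than ndots, each search domain is appended and tried first,
    // and the bare name is tried last.
    func candidates(name string, search []string, ndots int) []string {
    	var out []string
    	if strings.Count(name, ".") < ndots {
    		for _, s := range search {
    			out = append(out, name+"."+s)
    		}
    	}
    	return append(out, name) // the name as given is tried last
    }

    func main() {
    	// Assumed search list for a pod in the ingress-nginx namespace.
    	search := []string{
    		"ingress-nginx.svc.cluster.local",
    		"svc.cluster.local",
    		"cluster.local",
    		"us-east-2.compute.internal",
    	}
    	for _, c := range candidates("hello-world-app.default.svc.cluster.local", search, 5) {
    		fmt.Println(c)
    	}
    }

Running the sketch prints the five candidates in the same order the log records them for client port 45876: four NXDOMAIN suffix expansions, then the NOERROR answer for the bare name.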
	
	* 
	* ==> describe nodes <==
	* Name:               ingress-addon-legacy-684553
	Roles:              master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ingress-addon-legacy-684553
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c8086df69b157f30f19d083fe45cc014f102df45
	                    minikube.k8s.io/name=ingress-addon-legacy-684553
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_11_27T23_43_38_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 27 Nov 2023 23:43:34 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ingress-addon-legacy-684553
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 27 Nov 2023 23:47:21 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 27 Nov 2023 23:47:11 +0000   Mon, 27 Nov 2023 23:43:28 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 27 Nov 2023 23:47:11 +0000   Mon, 27 Nov 2023 23:43:28 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 27 Nov 2023 23:47:11 +0000   Mon, 27 Nov 2023 23:43:28 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 27 Nov 2023 23:47:11 +0000   Mon, 27 Nov 2023 23:44:01 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ingress-addon-legacy-684553
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022500Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022500Ki
	  pods:               110
	System Info:
	  Machine ID:                 e80ce5026eb4499f950b9d075ce957d5
	  System UUID:                6d3e0114-f410-4d51-b8b6-c9a5804a1068
	  Boot ID:                    eb10cf4d-5884-4052-85dd-9e7b7999f82d
	  Kernel Version:             5.15.0-1050-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.18.20
	  Kube-Proxy Version:         v1.18.20
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                                   ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5f5d8b66bb-zwrwz                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	  default                     nginx                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m47s
	  kube-system                 coredns-66bff467f8-rbcrv                               100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     3m29s
	  kube-system                 etcd-ingress-addon-legacy-684553                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m42s
	  kube-system                 kindnet-68g2m                                          100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      3m29s
	  kube-system                 kube-apiserver-ingress-addon-legacy-684553             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m42s
	  kube-system                 kube-controller-manager-ingress-addon-legacy-684553    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m42s
	  kube-system                 kube-proxy-h29qg                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m29s
	  kube-system                 kube-scheduler-ingress-addon-legacy-684553             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m42s
	  kube-system                 storage-provisioner                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m29s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             120Mi (1%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  NodeHasSufficientMemory  3m57s (x4 over 3m57s)  kubelet     Node ingress-addon-legacy-684553 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m57s (x5 over 3m57s)  kubelet     Node ingress-addon-legacy-684553 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m57s (x4 over 3m57s)  kubelet     Node ingress-addon-legacy-684553 status is now: NodeHasSufficientPID
	  Normal  Starting                 3m42s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m42s                  kubelet     Node ingress-addon-legacy-684553 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m42s                  kubelet     Node ingress-addon-legacy-684553 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m42s                  kubelet     Node ingress-addon-legacy-684553 status is now: NodeHasSufficientPID
	  Normal  Starting                 3m28s                  kube-proxy  Starting kube-proxy.
	  Normal  NodeReady                3m22s                  kubelet     Node ingress-addon-legacy-684553 status is now: NodeReady
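
The percentages in the node summary follow directly from the capacity figures (2 CPUs, 8022500Ki of memory). A quick check of the arithmetic, with values copied from the output above; kubectl truncates rather than rounds, so 750m of 2000m shows as 37%:

    package main

    import "fmt"

    // percentOf reproduces the request/limit percentages in the node summary;
    // integer division truncates, like the kubectl output.
    func percentOf(request, allocatable int64) int64 {
    	return request * 100 / allocatable
    }

    func main() {
    	fmt.Println(percentOf(750, 2000))         // CPU: 750m of 2000m -> 37
    	fmt.Println(percentOf(120*1024, 8022500)) // memory: 120Mi (in Ki) of 8022500Ki -> 1
    }

Nothing here queries a live cluster; it only re-derives the figures shown.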
	
	* 
	* ==> dmesg <==
	* [  +0.001173] FS-Cache: O-key=[8] '7bd7c90000000000'
	[  +0.000758] FS-Cache: N-cookie c=00000055 [p=0000004b fl=2 nc=0 na=1]
	[  +0.001002] FS-Cache: N-cookie d=00000000fe658175{9p.inode} n=00000000124770db
	[  +0.001086] FS-Cache: N-key=[8] '7bd7c90000000000'
	[  +2.367044] FS-Cache: Duplicate cookie detected
	[  +0.000741] FS-Cache: O-cookie c=0000004d [p=0000004b fl=226 nc=0 na=1]
	[  +0.001026] FS-Cache: O-cookie d=00000000fe658175{9p.inode} n=0000000014c47df7
	[  +0.001148] FS-Cache: O-key=[8] '7ad7c90000000000'
	[  +0.000733] FS-Cache: N-cookie c=00000057 [p=0000004b fl=2 nc=0 na=1]
	[  +0.001050] FS-Cache: N-cookie d=00000000fe658175{9p.inode} n=00000000ce1a5764
	[  +0.001129] FS-Cache: N-key=[8] '7ad7c90000000000'
	[  +0.423214] FS-Cache: Duplicate cookie detected
	[  +0.000771] FS-Cache: O-cookie c=00000051 [p=0000004b fl=226 nc=0 na=1]
	[  +0.001066] FS-Cache: O-cookie d=00000000fe658175{9p.inode} n=00000000c7b9da3e
	[  +0.001094] FS-Cache: O-key=[8] '80d7c90000000000'
	[  +0.000747] FS-Cache: N-cookie c=00000058 [p=0000004b fl=2 nc=0 na=1]
	[  +0.000969] FS-Cache: N-cookie d=00000000fe658175{9p.inode} n=00000000e29de338
	[  +0.001127] FS-Cache: N-key=[8] '80d7c90000000000'
	[  +4.315058] FS-Cache: Duplicate cookie detected
	[  +0.000758] FS-Cache: O-cookie c=0000005a [p=00000002 fl=222 nc=0 na=1]
	[  +0.001009] FS-Cache: O-cookie d=000000006e56f75c{9P.session} n=00000000db26fcaf
	[  +0.001116] FS-Cache: O-key=[10] '34333030363632333434'
	[  +0.000817] FS-Cache: N-cookie c=0000005b [p=00000002 fl=2 nc=0 na=1]
	[  +0.001032] FS-Cache: N-cookie d=000000006e56f75c{9P.session} n=00000000f4cdfc72
	[  +0.001142] FS-Cache: N-key=[10] '34333030363632333434'
	
	* 
	* ==> etcd [22df5594ef923279fd3ba658c993a2a05911c37ac3ad1c656915c450b480d26d] <==
	* raft2023/11/27 23:43:30 INFO: aec36adc501070cc became follower at term 0
	raft2023/11/27 23:43:30 INFO: newRaft aec36adc501070cc [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
	raft2023/11/27 23:43:30 INFO: aec36adc501070cc became follower at term 1
	raft2023/11/27 23:43:30 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2023-11-27 23:43:30.243730 W | auth: simple token is not cryptographically signed
	2023-11-27 23:43:30.340195 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
	2023-11-27 23:43:30.347125 I | etcdserver: aec36adc501070cc as single-node; fast-forwarding 9 ticks (election ticks 10)
	raft2023/11/27 23:43:30 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2023-11-27 23:43:30.397997 I | etcdserver/membership: added member aec36adc501070cc [https://192.168.49.2:2380] to cluster fa54960ea34d58be
	2023-11-27 23:43:30.550279 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2023-11-27 23:43:30.550474 I | embed: listening for peers on 192.168.49.2:2380
	2023-11-27 23:43:30.550626 I | embed: listening for metrics on http://127.0.0.1:2381
	raft2023/11/27 23:43:30 INFO: aec36adc501070cc is starting a new election at term 1
	raft2023/11/27 23:43:30 INFO: aec36adc501070cc became candidate at term 2
	raft2023/11/27 23:43:30 INFO: aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2
	raft2023/11/27 23:43:30 INFO: aec36adc501070cc became leader at term 2
	raft2023/11/27 23:43:30 INFO: raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2
	2023-11-27 23:43:30.890767 I | etcdserver: published {Name:ingress-addon-legacy-684553 ClientURLs:[https://192.168.49.2:2379]} to cluster fa54960ea34d58be
	2023-11-27 23:43:30.890935 I | embed: ready to serve client requests
	2023-11-27 23:43:30.892502 I | embed: serving client requests on 127.0.0.1:2379
	2023-11-27 23:43:30.892676 I | embed: ready to serve client requests
	2023-11-27 23:43:30.893997 I | embed: serving client requests on 192.168.49.2:2379
	2023-11-27 23:43:30.896530 I | etcdserver: setting up the initial cluster version to 3.4
	2023-11-27 23:43:30.897184 N | etcdserver/membership: set the initial cluster version to 3.4
	2023-11-27 23:43:30.897269 I | etcdserver/api: enabled capabilities for version 3.4
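
The election above completes in a single round because this is a one-member cluster: quorum for n voters is floor(n/2)+1, so with one voter the candidate's own MsgVoteResp is enough to become leader at term 2. The arithmetic only, not etcd's raft implementation:

    package main

    import "fmt"

    // quorum returns the majority threshold for a raft voter set; with a
    // single member the self-vote alone reaches quorum, which is why the
    // log shows leadership won immediately.
    func quorum(voters int) int {
    	return voters/2 + 1
    }

    func main() {
    	fmt.Println(quorum(1)) // 1: self-vote suffices
    	fmt.Println(quorum(3)) // 2: a majority of three
    }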
	
	* 
	* ==> kernel <==
	*  23:47:23 up  6:29,  0 users,  load average: 0.22, 1.10, 2.04
	Linux ingress-addon-legacy-684553 5.15.0-1050-aws #55~20.04.1-Ubuntu SMP Mon Nov 6 12:18:16 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	* 
	* ==> kindnet [b8390a345c6104be990912827ca4ccf997668578e637b698d2d3dc016e3a13e1] <==
	* I1127 23:45:17.378799       1 main.go:227] handling current node
	I1127 23:45:27.390350       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1127 23:45:27.390378       1 main.go:227] handling current node
	I1127 23:45:37.400988       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1127 23:45:37.401020       1 main.go:227] handling current node
	I1127 23:45:47.404207       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1127 23:45:47.404245       1 main.go:227] handling current node
	I1127 23:45:57.407713       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1127 23:45:57.407745       1 main.go:227] handling current node
	I1127 23:46:07.410969       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1127 23:46:07.410999       1 main.go:227] handling current node
	I1127 23:46:17.421824       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1127 23:46:17.421884       1 main.go:227] handling current node
	I1127 23:46:27.430449       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1127 23:46:27.430478       1 main.go:227] handling current node
	I1127 23:46:37.441374       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1127 23:46:37.441406       1 main.go:227] handling current node
	I1127 23:46:47.451897       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1127 23:46:47.451924       1 main.go:227] handling current node
	I1127 23:46:57.456992       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1127 23:46:57.457020       1 main.go:227] handling current node
	I1127 23:47:07.467632       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1127 23:47:07.467662       1 main.go:227] handling current node
	I1127 23:47:17.477499       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1127 23:47:17.477535       1 main.go:227] handling current node
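
kindnet logs the same pair of lines every ~10 seconds because it re-lists nodes on a fixed interval even when nothing has changed. A sketch of that reconcile-on-a-ticker shape; the node map, the interval, and the three-pass cutoff are assumptions for illustration (the real loop runs forever):

    package main

    import (
    	"fmt"
    	"time"
    )

    // reconcile stands in for kindnet's per-node pass; with a single node it
    // produces the same two log lines on every tick.
    func reconcile(nodes map[string]struct{}) {
    	for ip := range nodes {
    		fmt.Printf("Handling node with IPs: map[%s:{}]\n", ip)
    		fmt.Println("handling current node")
    	}
    }

    func main() {
    	nodes := map[string]struct{}{"192.168.49.2": {}}
    	ticker := time.NewTicker(10 * time.Second)
    	defer ticker.Stop()
    	for i := 0; i < 3; i++ {
    		reconcile(nodes)
    		<-ticker.C
    	}
    }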
	
	* 
	* ==> kube-apiserver [1acf25e8249c5381d47a9f8c850bdb43e5288a5fe8d06b35698cbdc10440838a] <==
	* E1127 23:43:34.855816       1 controller.go:152] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.49.2, ResourceVersion: 0, AdditionalErrorMsg: 
	I1127 23:43:34.859962       1 shared_informer.go:230] Caches are synced for cluster_authentication_trust_controller 
	I1127 23:43:34.860008       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1127 23:43:34.872342       1 shared_informer.go:230] Caches are synced for crd-autoregister 
	I1127 23:43:34.872415       1 cache.go:39] Caches are synced for autoregister controller
	I1127 23:43:34.872811       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1127 23:43:35.655694       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I1127 23:43:35.655839       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I1127 23:43:35.662143       1 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
	I1127 23:43:35.665913       1 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
	I1127 23:43:35.666004       1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
	I1127 23:43:36.152496       1 controller.go:609] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1127 23:43:36.192407       1 controller.go:609] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W1127 23:43:36.293480       1 lease.go:224] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I1127 23:43:36.294633       1 controller.go:609] quota admission added evaluator for: endpoints
	I1127 23:43:36.298548       1 controller.go:609] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1127 23:43:37.103269       1 controller.go:609] quota admission added evaluator for: serviceaccounts
	I1127 23:43:37.722835       1 controller.go:609] quota admission added evaluator for: deployments.apps
	I1127 23:43:37.840499       1 controller.go:609] quota admission added evaluator for: daemonsets.apps
	I1127 23:43:41.203073       1 controller.go:609] quota admission added evaluator for: leases.coordination.k8s.io
	I1127 23:43:54.100977       1 controller.go:609] quota admission added evaluator for: replicasets.apps
	I1127 23:43:54.271818       1 controller.go:609] quota admission added evaluator for: controllerrevisions.apps
	I1127 23:44:11.302539       1 controller.go:609] quota admission added evaluator for: jobs.batch
	I1127 23:44:36.505755       1 controller.go:609] quota admission added evaluator for: ingresses.networking.k8s.io
	E1127 23:47:14.265724       1 watch.go:251] unable to encode watch object *v1.WatchEvent: http2: stream closed (&streaming.encoder{writer:(*http2.responseWriter)(0x4008cd9318), encoder:(*versioning.codec)(0x400dc6fd60), buf:(*bytes.Buffer)(0x400a595ad0)})
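
The final watch error is the server-side view of a client dropping an HTTP/2 stream mid-watch, which is routine: the watch channel closes on the client and the client is expected to re-establish it. A hedged client-go sketch of that reconnect loop (requires the k8s.io/client-go module; the kubeconfig path is a placeholder):

    package main

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Placeholder path; any kubeconfig pointing at the cluster works.
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	for {
    		w, err := cs.CoreV1().Pods("default").Watch(context.Background(), metav1.ListOptions{})
    		if err != nil {
    			panic(err)
    		}
    		for ev := range w.ResultChan() {
    			fmt.Println("event:", ev.Type)
    		}
    		// ResultChan closes when the server or transport drops the stream
    		// (the http2 "stream closed" above is the server's view of such a
    		// drop); a robust client simply loops and re-establishes the watch.
    	}
    }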
	
	* 
	* ==> kube-controller-manager [1e5be3fa800458921210586da07ac63263a413e896442c320b2a4725a9b03dc2] <==
	* I1127 23:43:54.316532       1 node_lifecycle_controller.go:1433] Initializing eviction metric for zone: 
	W1127 23:43:54.316570       1 node_lifecycle_controller.go:1048] Missing timestamp for Node ingress-addon-legacy-684553. Assuming now as a timestamp.
	I1127 23:43:54.316616       1 node_lifecycle_controller.go:1199] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
	I1127 23:43:54.316889       1 taint_manager.go:187] Starting NoExecuteTaintManager
	I1127 23:43:54.319628       1 event.go:278] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ingress-addon-legacy-684553", UID:"f13f0941-aca9-43b3-bdf3-e52b0f391fd8", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node ingress-addon-legacy-684553 event: Registered Node ingress-addon-legacy-684553 in Controller
	I1127 23:43:54.339758       1 shared_informer.go:230] Caches are synced for endpoint_slice 
	I1127 23:43:54.342307       1 shared_informer.go:230] Caches are synced for garbage collector 
	I1127 23:43:54.344995       1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I1127 23:43:54.382259       1 range_allocator.go:373] Set node ingress-addon-legacy-684553 PodCIDR to [10.244.0.0/24]
	I1127 23:43:54.477078       1 event.go:278] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kindnet", UID:"f029ebf4-4343-481b-8506-9255aaf136d2", APIVersion:"apps/v1", ResourceVersion:"208", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kindnet-68g2m
	I1127 23:43:54.482549       1 event.go:278] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kube-proxy", UID:"d4701c0c-bc37-4c04-8569-787bc85f8860", APIVersion:"apps/v1", ResourceVersion:"201", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-proxy-h29qg
	E1127 23:43:54.578155       1 daemon_controller.go:321] kube-system/kindnet failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kindnet", GenerateName:"", Namespace:"kube-system", SelfLink:"/apis/apps/v1/namespaces/kube-system/daemonsets/kindnet", UID:"f029ebf4-4343-481b-8506-9255aaf136d2", ResourceVersion:"208", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63836725418, loc:(*time.Location)(0x6307ca0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1", "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"},\"name\":\"kindnet\",\"namespace\":\"kube-system\
"},\"spec\":{\"selector\":{\"matchLabels\":{\"app\":\"kindnet\"}},\"template\":{\"metadata\":{\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"}},\"spec\":{\"containers\":[{\"env\":[{\"name\":\"HOST_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.hostIP\"}}},{\"name\":\"POD_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.podIP\"}}},{\"name\":\"POD_SUBNET\",\"value\":\"10.244.0.0/16\"}],\"image\":\"docker.io/kindest/kindnetd:v20230809-80a64d96\",\"name\":\"kindnet-cni\",\"resources\":{\"limits\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"},\"requests\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"}},\"securityContext\":{\"capabilities\":{\"add\":[\"NET_RAW\",\"NET_ADMIN\"]},\"privileged\":false},\"volumeMounts\":[{\"mountPath\":\"/etc/cni/net.d\",\"name\":\"cni-cfg\"},{\"mountPath\":\"/run/xtables.lock\",\"name\":\"xtables-lock\",\"readOnly\":false},{\"mountPath\":\"/lib/modules\",\"name\":\"lib-modules\",\"readOnly\":true}]}],\"hostNetwork\":true,\"serviceAccountName\":\"kindnet\",
\"tolerations\":[{\"effect\":\"NoSchedule\",\"operator\":\"Exists\"}],\"volumes\":[{\"hostPath\":{\"path\":\"/etc/cni/net.d\",\"type\":\"DirectoryOrCreate\"},\"name\":\"cni-cfg\"},{\"hostPath\":{\"path\":\"/run/xtables.lock\",\"type\":\"FileOrCreate\"},\"name\":\"xtables-lock\"},{\"hostPath\":{\"path\":\"/lib/modules\"},\"name\":\"lib-modules\"}]}}}}\n"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubectl", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0x400181e4e0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x400181e500)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0x400181e520), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*
int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"cni-cfg", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x400181e540), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI
:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x400181e560), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVol
umeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x400181e580), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDis
k:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), Sca
leIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kindnet-cni", Image:"docker.io/kindest/kindnetd:v20230809-80a64d96", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"HOST_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0x400181e5a0)}, v1.EnvVar{Name:"POD_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0x400181e5e0)}, v1.EnvVar{Name:"POD_SUBNET", Value:"10.244.0.0/16", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}, Requests:v1.Re
sourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"cni-cfg", ReadOnly:false, MountPath:"/etc/cni/net.d", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log"
, TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0x40014b24b0), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0x4001145718), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"kindnet", DeprecatedServiceAccount:"kindnet", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0x40002192d0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.P
odDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0x40000b3b28)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0x4001145760)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kindnet": the object has been modified; please apply your changes to the latest version and try again
	E1127 23:43:54.632365       1 daemon_controller.go:321] kube-system/kindnet failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kindnet", GenerateName:"", Namespace:"kube-system", SelfLink:"/apis/apps/v1/namespaces/kube-system/daemonsets/kindnet", UID:"f029ebf4-4343-481b-8506-9255aaf136d2", ResourceVersion:"348", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63836725418, loc:(*time.Location)(0x6307ca0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1", "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"},\"name\":\"kindnet\",\"namespace\":\"kube-system\
"},\"spec\":{\"selector\":{\"matchLabels\":{\"app\":\"kindnet\"}},\"template\":{\"metadata\":{\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"}},\"spec\":{\"containers\":[{\"env\":[{\"name\":\"HOST_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.hostIP\"}}},{\"name\":\"POD_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.podIP\"}}},{\"name\":\"POD_SUBNET\",\"value\":\"10.244.0.0/16\"}],\"image\":\"docker.io/kindest/kindnetd:v20230809-80a64d96\",\"name\":\"kindnet-cni\",\"resources\":{\"limits\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"},\"requests\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"}},\"securityContext\":{\"capabilities\":{\"add\":[\"NET_RAW\",\"NET_ADMIN\"]},\"privileged\":false},\"volumeMounts\":[{\"mountPath\":\"/etc/cni/net.d\",\"name\":\"cni-cfg\"},{\"mountPath\":\"/run/xtables.lock\",\"name\":\"xtables-lock\",\"readOnly\":false},{\"mountPath\":\"/lib/modules\",\"name\":\"lib-modules\",\"readOnly\":true}]}],\"hostNetwork\":true,\"serviceAccountName\":\"kindnet\",
\"tolerations\":[{\"effect\":\"NoSchedule\",\"operator\":\"Exists\"}],\"volumes\":[{\"hostPath\":{\"path\":\"/etc/cni/net.d\",\"type\":\"DirectoryOrCreate\"},\"name\":\"cni-cfg\"},{\"hostPath\":{\"path\":\"/run/xtables.lock\",\"type\":\"FileOrCreate\"},\"name\":\"xtables-lock\"},{\"hostPath\":{\"path\":\"/lib/modules\"},\"name\":\"lib-modules\"}]}}}}\n"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubectl", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0x4001c26e00), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x4001c26e20)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0x4001c26e40), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x4001c26e60)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0x4001c26e80), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"",
UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"cni-cfg", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4001c26ea0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*
v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4001c26ec0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStore
VolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.
CSIVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4001c26ee0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*
v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kindnet-cni", Image:"docker.io/kindest/kindnetd:v20230809-80a64d96", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"HOST_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0x4001c26f00)}, v1.EnvVar{Name:"POD_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0x4001c26f40)}, v1.EnvVar{Name:"POD_SUBNET", Value:"10.244.0.0/16", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10
0m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"cni-cfg", ReadOnly:false, MountPath:"/etc/cni/net.d", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1
.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0x4001c34460), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0x4001c099f8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"kindnet", DeprecatedServiceAccount:"kindnet", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0x40004e2070), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Tolera
tion{Key:"", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0x4001283c28)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0x4001c09a40)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:1, NumberReady:0, ObservedGeneration:1, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:1, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kindnet": the object has been modified; please a
pply your changes to the latest version and try again
	I1127 23:44:04.317170       1 node_lifecycle_controller.go:1226] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	I1127 23:44:11.277938       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"753d0959-558f-42ec-b1b5-24c4dffabc87", APIVersion:"apps/v1", ResourceVersion:"459", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set ingress-nginx-controller-7fcf777cb7 to 1
	I1127 23:44:11.285917       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7", UID:"b8fe06a7-7f88-46a6-b1c2-c282ebbd918d", APIVersion:"apps/v1", ResourceVersion:"461", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-controller-7fcf777cb7-f6htr
	I1127 23:44:11.324399       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"3fb5c108-d36d-42e1-8467-f3f06b78b5ce", APIVersion:"batch/v1", ResourceVersion:"470", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-create-8src7
	I1127 23:44:11.398381       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"e224f134-c5f9-4b50-83ec-5ea53669a5c1", APIVersion:"batch/v1", ResourceVersion:"476", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-patch-fb278
	I1127 23:44:15.395698       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"3fb5c108-d36d-42e1-8467-f3f06b78b5ce", APIVersion:"batch/v1", ResourceVersion:"478", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I1127 23:44:15.414148       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"e224f134-c5f9-4b50-83ec-5ea53669a5c1", APIVersion:"batch/v1", ResourceVersion:"486", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I1127 23:46:55.870646       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"hello-world-app", UID:"6d5db406-a9cc-441c-9cb3-ea3d147a5641", APIVersion:"apps/v1", ResourceVersion:"699", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set hello-world-app-5f5d8b66bb to 1
	I1127 23:46:55.906754       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"hello-world-app-5f5d8b66bb", UID:"08092c75-0d89-41ef-8e1b-80fa27644b49", APIVersion:"apps/v1", ResourceVersion:"700", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: hello-world-app-5f5d8b66bb-zwrwz
	E1127 23:47:19.934848       1 tokens_controller.go:261] error synchronizing serviceaccount ingress-nginx/default: secrets "default-token-llql5" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated
	
	* 
	* ==> kube-proxy [9435ffa04278614f2456d844afc7e20e64e990e0d7512820da71072053d23861] <==
	* W1127 23:43:55.085558       1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
	I1127 23:43:55.101147       1 node.go:136] Successfully retrieved node IP: 192.168.49.2
	I1127 23:43:55.101209       1 server_others.go:186] Using iptables Proxier.
	I1127 23:43:55.101620       1 server.go:583] Version: v1.18.20
	I1127 23:43:55.107390       1 config.go:315] Starting service config controller
	I1127 23:43:55.111869       1 shared_informer.go:223] Waiting for caches to sync for service config
	I1127 23:43:55.107563       1 config.go:133] Starting endpoints config controller
	I1127 23:43:55.115987       1 shared_informer.go:223] Waiting for caches to sync for endpoints config
	I1127 23:43:55.116061       1 shared_informer.go:230] Caches are synced for endpoints config 
	I1127 23:43:55.212078       1 shared_informer.go:230] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [6c68aad4122391a83e21c72c13ac4c61b482c428beeb15614fdb76539dc477fb] <==
	* W1127 23:43:34.828077       1 authentication.go:299] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1127 23:43:34.858166       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I1127 23:43:34.858258       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I1127 23:43:34.861316       1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
	I1127 23:43:34.862000       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1127 23:43:34.862133       1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1127 23:43:34.862020       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E1127 23:43:34.871201       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1127 23:43:34.872026       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1127 23:43:34.872152       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1127 23:43:34.872262       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1127 23:43:34.872500       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1127 23:43:34.872602       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1127 23:43:34.872709       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1127 23:43:34.872816       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1127 23:43:34.872940       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1127 23:43:34.873093       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1127 23:43:34.873348       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1127 23:43:34.879452       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1127 23:43:35.801624       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1127 23:43:35.875780       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1127 23:43:35.891892       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I1127 23:43:37.462333       1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	E1127 23:43:54.569802       1 factory.go:503] pod kube-system/coredns-66bff467f8-rbcrv is already present in the backoff queue
	E1127 23:43:54.705030       1 factory.go:503] pod: kube-system/storage-provisioner is already present in unschedulable queue
	
	* 
	* ==> kubelet <==
	* Nov 27 23:47:01 ingress-addon-legacy-684553 kubelet[1643]: I1127 23:47:01.219405    1643 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 1638cf1bad1effb34e49b07330240ae02a6d6b49bbc65f4c9b26d13106a1012e
	Nov 27 23:47:01 ingress-addon-legacy-684553 kubelet[1643]: I1127 23:47:01.219531    1643 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: fce98dda2ddd20b643bdc8aa3adc13942805138d78bbde3946f87ded2eb70c06
	Nov 27 23:47:01 ingress-addon-legacy-684553 kubelet[1643]: E1127 23:47:01.219766    1643 pod_workers.go:191] Error syncing pod 8810c9af-9eb4-4cf4-a108-32ab0e8337c8 ("hello-world-app-5f5d8b66bb-zwrwz_default(8810c9af-9eb4-4cf4-a108-32ab0e8337c8)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 10s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-zwrwz_default(8810c9af-9eb4-4cf4-a108-32ab0e8337c8)"
	Nov 27 23:47:02 ingress-addon-legacy-684553 kubelet[1643]: I1127 23:47:02.222036    1643 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: fce98dda2ddd20b643bdc8aa3adc13942805138d78bbde3946f87ded2eb70c06
	Nov 27 23:47:02 ingress-addon-legacy-684553 kubelet[1643]: E1127 23:47:02.222294    1643 pod_workers.go:191] Error syncing pod 8810c9af-9eb4-4cf4-a108-32ab0e8337c8 ("hello-world-app-5f5d8b66bb-zwrwz_default(8810c9af-9eb4-4cf4-a108-32ab0e8337c8)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 10s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-zwrwz_default(8810c9af-9eb4-4cf4-a108-32ab0e8337c8)"
	Nov 27 23:47:09 ingress-addon-legacy-684553 kubelet[1643]: E1127 23:47:09.255652    1643 remote_image.go:87] ImageStatus "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" from image service failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Nov 27 23:47:09 ingress-addon-legacy-684553 kubelet[1643]: E1127 23:47:09.255687    1643 kuberuntime_image.go:85] ImageStatus for image {"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab"} failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Nov 27 23:47:09 ingress-addon-legacy-684553 kubelet[1643]: E1127 23:47:09.255726    1643 kuberuntime_manager.go:818] container start failed: ImageInspectError: Failed to inspect image "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab": rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Nov 27 23:47:09 ingress-addon-legacy-684553 kubelet[1643]: E1127 23:47:09.255756    1643 pod_workers.go:191] Error syncing pod 39f96ff9-b5eb-4d36-b7ef-af91537a2bf8 ("kube-ingress-dns-minikube_kube-system(39f96ff9-b5eb-4d36-b7ef-af91537a2bf8)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with ImageInspectError: "Failed to inspect image \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\": rpc error: code = Unknown desc = short-name \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\" did not resolve to an alias and no unqualified-search registries are defined in \"/etc/containers/registries.conf\""
	Nov 27 23:47:11 ingress-addon-legacy-684553 kubelet[1643]: I1127 23:47:11.897545    1643 reconciler.go:196] operationExecutor.UnmountVolume started for volume "minikube-ingress-dns-token-ls4vx" (UniqueName: "kubernetes.io/secret/39f96ff9-b5eb-4d36-b7ef-af91537a2bf8-minikube-ingress-dns-token-ls4vx") pod "39f96ff9-b5eb-4d36-b7ef-af91537a2bf8" (UID: "39f96ff9-b5eb-4d36-b7ef-af91537a2bf8")
	Nov 27 23:47:11 ingress-addon-legacy-684553 kubelet[1643]: I1127 23:47:11.902207    1643 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39f96ff9-b5eb-4d36-b7ef-af91537a2bf8-minikube-ingress-dns-token-ls4vx" (OuterVolumeSpecName: "minikube-ingress-dns-token-ls4vx") pod "39f96ff9-b5eb-4d36-b7ef-af91537a2bf8" (UID: "39f96ff9-b5eb-4d36-b7ef-af91537a2bf8"). InnerVolumeSpecName "minikube-ingress-dns-token-ls4vx". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Nov 27 23:47:12 ingress-addon-legacy-684553 kubelet[1643]: I1127 23:47:11.998010    1643 reconciler.go:319] Volume detached for volume "minikube-ingress-dns-token-ls4vx" (UniqueName: "kubernetes.io/secret/39f96ff9-b5eb-4d36-b7ef-af91537a2bf8-minikube-ingress-dns-token-ls4vx") on node "ingress-addon-legacy-684553" DevicePath ""
	Nov 27 23:47:14 ingress-addon-legacy-684553 kubelet[1643]: I1127 23:47:14.254692    1643 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: fce98dda2ddd20b643bdc8aa3adc13942805138d78bbde3946f87ded2eb70c06
	Nov 27 23:47:15 ingress-addon-legacy-684553 kubelet[1643]: E1127 23:47:15.219645    1643 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-f6htr.179b9fa9a4429cc9", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-f6htr", UID:"d136fa23-4e4c-4427-ac14-054177890409", APIVersion:"v1", ResourceVersion:"467", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stoppi
ng container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-684553"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc1516880cce93ec9, ext:217544921902, loc:(*time.Location)(0x6a0ef20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc1516880cce93ec9, ext:217544921902, loc:(*time.Location)(0x6a0ef20)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-f6htr.179b9fa9a4429cc9" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Nov 27 23:47:15 ingress-addon-legacy-684553 kubelet[1643]: E1127 23:47:15.226999    1643 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-f6htr.179b9fa9a4429cc9", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-f6htr", UID:"d136fa23-4e4c-4427-ac14-054177890409", APIVersion:"v1", ResourceVersion:"467", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stoppi
ng container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-684553"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc1516880cce93ec9, ext:217544921902, loc:(*time.Location)(0x6a0ef20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc1516880cd4b06e9, ext:217551330134, loc:(*time.Location)(0x6a0ef20)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-f6htr.179b9fa9a4429cc9" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Nov 27 23:47:15 ingress-addon-legacy-684553 kubelet[1643]: I1127 23:47:15.241538    1643 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: fce98dda2ddd20b643bdc8aa3adc13942805138d78bbde3946f87ded2eb70c06
	Nov 27 23:47:15 ingress-addon-legacy-684553 kubelet[1643]: I1127 23:47:15.241786    1643 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 2becc930340a702319647145ab43d0e696f07602cff3adcda3ebf8915ab91d37
	Nov 27 23:47:15 ingress-addon-legacy-684553 kubelet[1643]: E1127 23:47:15.242056    1643 pod_workers.go:191] Error syncing pod 8810c9af-9eb4-4cf4-a108-32ab0e8337c8 ("hello-world-app-5f5d8b66bb-zwrwz_default(8810c9af-9eb4-4cf4-a108-32ab0e8337c8)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 20s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-zwrwz_default(8810c9af-9eb4-4cf4-a108-32ab0e8337c8)"
	Nov 27 23:47:18 ingress-addon-legacy-684553 kubelet[1643]: W1127 23:47:18.247772    1643 pod_container_deletor.go:77] Container "a18c4d2ac6f90ac62c48d1bff265dc8ecccf67664b7b4d2518c79610ff5214c4" not found in pod's containers
	Nov 27 23:47:19 ingress-addon-legacy-684553 kubelet[1643]: I1127 23:47:19.318973    1643 reconciler.go:196] operationExecutor.UnmountVolume started for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/d136fa23-4e4c-4427-ac14-054177890409-webhook-cert") pod "d136fa23-4e4c-4427-ac14-054177890409" (UID: "d136fa23-4e4c-4427-ac14-054177890409")
	Nov 27 23:47:19 ingress-addon-legacy-684553 kubelet[1643]: I1127 23:47:19.319045    1643 reconciler.go:196] operationExecutor.UnmountVolume started for volume "ingress-nginx-token-fzqj9" (UniqueName: "kubernetes.io/secret/d136fa23-4e4c-4427-ac14-054177890409-ingress-nginx-token-fzqj9") pod "d136fa23-4e4c-4427-ac14-054177890409" (UID: "d136fa23-4e4c-4427-ac14-054177890409")
	Nov 27 23:47:19 ingress-addon-legacy-684553 kubelet[1643]: I1127 23:47:19.326086    1643 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d136fa23-4e4c-4427-ac14-054177890409-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "d136fa23-4e4c-4427-ac14-054177890409" (UID: "d136fa23-4e4c-4427-ac14-054177890409"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Nov 27 23:47:19 ingress-addon-legacy-684553 kubelet[1643]: I1127 23:47:19.326536    1643 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d136fa23-4e4c-4427-ac14-054177890409-ingress-nginx-token-fzqj9" (OuterVolumeSpecName: "ingress-nginx-token-fzqj9") pod "d136fa23-4e4c-4427-ac14-054177890409" (UID: "d136fa23-4e4c-4427-ac14-054177890409"). InnerVolumeSpecName "ingress-nginx-token-fzqj9". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Nov 27 23:47:19 ingress-addon-legacy-684553 kubelet[1643]: I1127 23:47:19.419370    1643 reconciler.go:319] Volume detached for volume "ingress-nginx-token-fzqj9" (UniqueName: "kubernetes.io/secret/d136fa23-4e4c-4427-ac14-054177890409-ingress-nginx-token-fzqj9") on node "ingress-addon-legacy-684553" DevicePath ""
	Nov 27 23:47:19 ingress-addon-legacy-684553 kubelet[1643]: I1127 23:47:19.419442    1643 reconciler.go:319] Volume detached for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/d136fa23-4e4c-4427-ac14-054177890409-webhook-cert") on node "ingress-addon-legacy-684553" DevicePath ""
	
	* 
	* ==> storage-provisioner [3402b457aff015d09db9e7f1c78263a6daef77319f6b26fae76110a9d326001e] <==
	* I1127 23:44:06.731449       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1127 23:44:06.753455       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1127 23:44:06.753552       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1127 23:44:06.770124       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1127 23:44:06.770311       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-684553_e346f276-4b13-41f6-a4ee-f812cc234fa5!
	I1127 23:44:06.770993       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"fb3450f8-89ca-411d-93ce-4929123c3a6a", APIVersion:"v1", ResourceVersion:"413", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ingress-addon-legacy-684553_e346f276-4b13-41f6-a4ee-f812cc234fa5 became leader
	I1127 23:44:06.870896       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-684553_e346f276-4b13-41f6-a4ee-f812cc234fa5!
	

                                                
                                                
-- /stdout --
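The kube-controller-manager log above ends with a conflict: Operation cannot be fulfilled on daemonsets.apps "kindnet": the object has been modified. That is the API server's optimistic-concurrency check rejecting a write made with a stale ResourceVersion; it is transient and unrelated to the ingress failure. Clients handle it by re-reading the object and retrying, e.g. with client-go's RetryOnConflict. A minimal sketch of that pattern (hypothetical function name and label change; not minikube's or the controller's actual code):

	package example

	import (
		"context"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/util/retry"
	)

	// updateKindnetLabel re-reads the DaemonSet on every attempt so the write
	// always carries the latest ResourceVersion, retrying only on conflicts.
	func updateKindnetLabel(cs kubernetes.Interface) error {
		return retry.RetryOnConflict(retry.DefaultRetry, func() error {
			ds, err := cs.AppsV1().DaemonSets("kube-system").Get(context.TODO(), "kindnet", metav1.GetOptions{})
			if err != nil {
				return err
			}
			if ds.Labels == nil {
				ds.Labels = map[string]string{}
			}
			ds.Labels["example"] = "retry" // hypothetical change
			_, err = cs.AppsV1().DaemonSets("kube-system").Update(context.TODO(), ds, metav1.UpdateOptions{})
			return err
		})
	}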
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ingress-addon-legacy-684553 -n ingress-addon-legacy-684553
helpers_test.go:261: (dbg) Run:  kubectl --context ingress-addon-legacy-684553 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (181.13s)
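A recurring error in the kubelet log above is the failed pull of the ingress-dns image: CRI-O refuses the short name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:..." because /etc/containers/registries.conf defines no unqualified-search registries. The two usual remedies are to fully qualify the image reference (docker.io/cryptexlabs/minikube-ingress-dns:...) or to declare a search registry on the node. A minimal registries.conf sketch (illustrative; distro defaults vary):

	# /etc/containers/registries.conf
	# Registries consulted when an image name carries no registry prefix.
	unqualified-search-registries = ["docker.io"]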

                                                
                                    
x
+
TestMultiNode/serial/PingHostFrom2Pods (4.35s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:552: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-784312 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:560: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-784312 -- exec busybox-5bc68d56bd-cls7b -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-784312 -- exec busybox-5bc68d56bd-cls7b -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:571: (dbg) Non-zero exit: out/minikube-linux-arm64 kubectl -p multinode-784312 -- exec busybox-5bc68d56bd-cls7b -- sh -c "ping -c 1 192.168.58.1": exit status 1 (249.962746ms)

                                                
                                                
-- stdout --
	PING 192.168.58.1 (192.168.58.1): 56 data bytes

                                                
                                                
-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:572: Failed to ping host (192.168.58.1) from pod (busybox-5bc68d56bd-cls7b): exit status 1
multinode_test.go:560: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-784312 -- exec busybox-5bc68d56bd-dmvq4 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-784312 -- exec busybox-5bc68d56bd-dmvq4 -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:571: (dbg) Non-zero exit: out/minikube-linux-arm64 kubectl -p multinode-784312 -- exec busybox-5bc68d56bd-dmvq4 -- sh -c "ping -c 1 192.168.58.1": exit status 1 (262.781471ms)

                                                
                                                
-- stdout --
	PING 192.168.58.1 (192.168.58.1): 56 data bytes

                                                
                                                
-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:572: Failed to ping host (192.168.58.1) from pod (busybox-5bc68d56bd-dmvq4): exit status 1
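Both pods resolve host.minikube.internal, but the ping itself fails with "ping: permission denied (are you root?)": busybox's ping opens a raw ICMP socket, which needs either CAP_NET_RAW in the container or a net.ipv4.ping_group_range sysctl that covers the pod's group. An illustrative pod spec granting the capability (hypothetical name; not the test's own manifest):

	# ping-capable.yaml -- illustrative pod spec that allows raw ICMP sockets
	apiVersion: v1
	kind: Pod
	metadata:
	  name: ping-test
	spec:
	  containers:
	  - name: busybox
	    image: busybox
	    command: ["sleep", "3600"]
	    securityContext:
	      capabilities:
	        add: ["NET_RAW"]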
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-784312
helpers_test.go:235: (dbg) docker inspect multinode-784312:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "6fd8b66557924c6a255350f8721ffd2110fc0027983ca6688ef0474215f93244",
	        "Created": "2023-11-27T23:53:57.220074108Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1526027,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-11-27T23:53:57.562675476Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:6b2707af759835616662ab4511fa7cfc968ed5500b8f30a5d231d9af64582310",
	        "ResolvConfPath": "/var/lib/docker/containers/6fd8b66557924c6a255350f8721ffd2110fc0027983ca6688ef0474215f93244/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6fd8b66557924c6a255350f8721ffd2110fc0027983ca6688ef0474215f93244/hostname",
	        "HostsPath": "/var/lib/docker/containers/6fd8b66557924c6a255350f8721ffd2110fc0027983ca6688ef0474215f93244/hosts",
	        "LogPath": "/var/lib/docker/containers/6fd8b66557924c6a255350f8721ffd2110fc0027983ca6688ef0474215f93244/6fd8b66557924c6a255350f8721ffd2110fc0027983ca6688ef0474215f93244-json.log",
	        "Name": "/multinode-784312",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "multinode-784312:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "multinode-784312",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/6193366be7a435b534278fa21ab18f2167fc193b9144938e6c48feaaea65da69-init/diff:/var/lib/docker/overlay2/66e18f6b92e8847ad9065a2bde54888b27c493e8cb472385d095e2aee2f57672/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6193366be7a435b534278fa21ab18f2167fc193b9144938e6c48feaaea65da69/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6193366be7a435b534278fa21ab18f2167fc193b9144938e6c48feaaea65da69/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6193366be7a435b534278fa21ab18f2167fc193b9144938e6c48feaaea65da69/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "multinode-784312",
	                "Source": "/var/lib/docker/volumes/multinode-784312/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "multinode-784312",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "multinode-784312",
	                "name.minikube.sigs.k8s.io": "multinode-784312",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f04b0fe344a722ac76f82ceb829adb687a913f2df2143e62f2592c853624ae88",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34144"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34143"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34140"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34142"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34141"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/f04b0fe344a7",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "multinode-784312": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "6fd8b6655792",
	                        "multinode-784312"
	                    ],
	                    "NetworkID": "4a3aa0fb5c5a72c62303406f21afd91c49bcd3a059bb219bb75d0e5ce312c2ae",
	                    "EndpointID": "90e23305274e2310cf7e21e185287c91c1bdece1e3372d4030ed87e5c9670829",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3a:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
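The inspect dump confirms the address the test pings: the multinode-784312 network's Gateway is 192.168.58.1 and the node itself sits at 192.168.58.2. When only one field matters, a Go template keeps this output short; for example (illustrative command, not part of the test harness):

	docker inspect -f '{{range .NetworkSettings.Networks}}{{.Gateway}}{{end}}' multinode-784312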
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p multinode-784312 -n multinode-784312
helpers_test.go:244: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p multinode-784312 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p multinode-784312 logs -n 25: (1.597420936s)
helpers_test.go:252: TestMultiNode/serial/PingHostFrom2Pods logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -p mount-start-2-129599                           | mount-start-2-129599 | jenkins | v1.32.0 | 27 Nov 23 23:53 UTC | 27 Nov 23 23:53 UTC |
	|         | --memory=2048 --mount                             |                      |         |         |                     |                     |
	|         | --mount-gid 0 --mount-msize                       |                      |         |         |                     |                     |
	|         | 6543 --mount-port 46465                           |                      |         |         |                     |                     |
	|         | --mount-uid 0 --no-kubernetes                     |                      |         |         |                     |                     |
	|         | --driver=docker                                   |                      |         |         |                     |                     |
	|         | --container-runtime=crio                          |                      |         |         |                     |                     |
	| ssh     | mount-start-2-129599 ssh -- ls                    | mount-start-2-129599 | jenkins | v1.32.0 | 27 Nov 23 23:53 UTC | 27 Nov 23 23:53 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| delete  | -p mount-start-1-127843                           | mount-start-1-127843 | jenkins | v1.32.0 | 27 Nov 23 23:53 UTC | 27 Nov 23 23:53 UTC |
	|         | --alsologtostderr -v=5                            |                      |         |         |                     |                     |
	| ssh     | mount-start-2-129599 ssh -- ls                    | mount-start-2-129599 | jenkins | v1.32.0 | 27 Nov 23 23:53 UTC | 27 Nov 23 23:53 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| stop    | -p mount-start-2-129599                           | mount-start-2-129599 | jenkins | v1.32.0 | 27 Nov 23 23:53 UTC | 27 Nov 23 23:53 UTC |
	| start   | -p mount-start-2-129599                           | mount-start-2-129599 | jenkins | v1.32.0 | 27 Nov 23 23:53 UTC | 27 Nov 23 23:53 UTC |
	| ssh     | mount-start-2-129599 ssh -- ls                    | mount-start-2-129599 | jenkins | v1.32.0 | 27 Nov 23 23:53 UTC | 27 Nov 23 23:53 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| delete  | -p mount-start-2-129599                           | mount-start-2-129599 | jenkins | v1.32.0 | 27 Nov 23 23:53 UTC | 27 Nov 23 23:53 UTC |
	| delete  | -p mount-start-1-127843                           | mount-start-1-127843 | jenkins | v1.32.0 | 27 Nov 23 23:53 UTC | 27 Nov 23 23:53 UTC |
	| start   | -p multinode-784312                               | multinode-784312     | jenkins | v1.32.0 | 27 Nov 23 23:53 UTC | 27 Nov 23 23:56 UTC |
	|         | --wait=true --memory=2200                         |                      |         |         |                     |                     |
	|         | --nodes=2 -v=8                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr                                 |                      |         |         |                     |                     |
	|         | --driver=docker                                   |                      |         |         |                     |                     |
	|         | --container-runtime=crio                          |                      |         |         |                     |                     |
	| kubectl | -p multinode-784312 -- apply -f                   | multinode-784312     | jenkins | v1.32.0 | 27 Nov 23 23:56 UTC | 27 Nov 23 23:56 UTC |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                      |         |         |                     |                     |
	| kubectl | -p multinode-784312 -- rollout                    | multinode-784312     | jenkins | v1.32.0 | 27 Nov 23 23:56 UTC | 27 Nov 23 23:56 UTC |
	|         | status deployment/busybox                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-784312 -- get pods -o                | multinode-784312     | jenkins | v1.32.0 | 27 Nov 23 23:56 UTC | 27 Nov 23 23:56 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |         |                     |                     |
	| kubectl | -p multinode-784312 -- get pods -o                | multinode-784312     | jenkins | v1.32.0 | 27 Nov 23 23:56 UTC | 27 Nov 23 23:56 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-784312 -- exec                       | multinode-784312     | jenkins | v1.32.0 | 27 Nov 23 23:56 UTC | 27 Nov 23 23:56 UTC |
	|         | busybox-5bc68d56bd-cls7b --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-784312 -- exec                       | multinode-784312     | jenkins | v1.32.0 | 27 Nov 23 23:56 UTC | 27 Nov 23 23:56 UTC |
	|         | busybox-5bc68d56bd-dmvq4 --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-784312 -- exec                       | multinode-784312     | jenkins | v1.32.0 | 27 Nov 23 23:56 UTC | 27 Nov 23 23:56 UTC |
	|         | busybox-5bc68d56bd-cls7b --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-784312 -- exec                       | multinode-784312     | jenkins | v1.32.0 | 27 Nov 23 23:56 UTC | 27 Nov 23 23:56 UTC |
	|         | busybox-5bc68d56bd-dmvq4 --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-784312 -- exec                       | multinode-784312     | jenkins | v1.32.0 | 27 Nov 23 23:56 UTC | 27 Nov 23 23:56 UTC |
	|         | busybox-5bc68d56bd-cls7b -- nslookup              |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-784312 -- exec                       | multinode-784312     | jenkins | v1.32.0 | 27 Nov 23 23:56 UTC | 27 Nov 23 23:56 UTC |
	|         | busybox-5bc68d56bd-dmvq4 -- nslookup              |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-784312 -- get pods -o                | multinode-784312     | jenkins | v1.32.0 | 27 Nov 23 23:56 UTC | 27 Nov 23 23:56 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-784312 -- exec                       | multinode-784312     | jenkins | v1.32.0 | 27 Nov 23 23:56 UTC | 27 Nov 23 23:56 UTC |
	|         | busybox-5bc68d56bd-cls7b                          |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-784312 -- exec                       | multinode-784312     | jenkins | v1.32.0 | 27 Nov 23 23:56 UTC |                     |
	|         | busybox-5bc68d56bd-cls7b -- sh                    |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.168.58.1                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-784312 -- exec                       | multinode-784312     | jenkins | v1.32.0 | 27 Nov 23 23:56 UTC | 27 Nov 23 23:56 UTC |
	|         | busybox-5bc68d56bd-dmvq4                          |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-784312 -- exec                       | multinode-784312     | jenkins | v1.32.0 | 27 Nov 23 23:56 UTC |                     |
	|         | busybox-5bc68d56bd-dmvq4 -- sh                    |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.168.58.1                         |                      |         |         |                     |                     |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/11/27 23:53:51
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.21.4 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1127 23:53:51.695858 1525568 out.go:296] Setting OutFile to fd 1 ...
	I1127 23:53:51.696062 1525568 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1127 23:53:51.696072 1525568 out.go:309] Setting ErrFile to fd 2...
	I1127 23:53:51.696079 1525568 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1127 23:53:51.696340 1525568 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17206-1455288/.minikube/bin
	I1127 23:53:51.696764 1525568 out.go:303] Setting JSON to false
	I1127 23:53:51.697696 1525568 start.go:128] hostinfo: {"hostname":"ip-172-31-29-130","uptime":23781,"bootTime":1701105451,"procs":243,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1050-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1127 23:53:51.697771 1525568 start.go:138] virtualization:  
	I1127 23:53:51.700570 1525568 out.go:177] * [multinode-784312] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I1127 23:53:51.702729 1525568 out.go:177]   - MINIKUBE_LOCATION=17206
	I1127 23:53:51.704360 1525568 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1127 23:53:51.702894 1525568 notify.go:220] Checking for updates...
	I1127 23:53:51.707711 1525568 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17206-1455288/kubeconfig
	I1127 23:53:51.709231 1525568 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17206-1455288/.minikube
	I1127 23:53:51.710947 1525568 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1127 23:53:51.712473 1525568 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1127 23:53:51.714210 1525568 driver.go:378] Setting default libvirt URI to qemu:///system
	I1127 23:53:51.741313 1525568 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1127 23:53:51.741438 1525568 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1127 23:53:51.824002 1525568 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:35 SystemTime:2023-11-27 23:53:51.814137835 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1050-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1127 23:53:51.824110 1525568 docker.go:295] overlay module found
	I1127 23:53:51.828357 1525568 out.go:177] * Using the docker driver based on user configuration
	I1127 23:53:51.830500 1525568 start.go:298] selected driver: docker
	I1127 23:53:51.830517 1525568 start.go:902] validating driver "docker" against <nil>
	I1127 23:53:51.830530 1525568 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1127 23:53:51.831174 1525568 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1127 23:53:51.898310 1525568 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:35 SystemTime:2023-11-27 23:53:51.888874349 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1050-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1127 23:53:51.898474 1525568 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1127 23:53:51.898710 1525568 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1127 23:53:51.900786 1525568 out.go:177] * Using Docker driver with root privileges
	I1127 23:53:51.902414 1525568 cni.go:84] Creating CNI manager for ""
	I1127 23:53:51.902436 1525568 cni.go:136] 0 nodes found, recommending kindnet
	I1127 23:53:51.902445 1525568 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1127 23:53:51.902463 1525568 start_flags.go:323] config:
	{Name:multinode-784312 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-784312 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1127 23:53:51.904524 1525568 out.go:177] * Starting control plane node multinode-784312 in cluster multinode-784312
	I1127 23:53:51.906132 1525568 cache.go:121] Beginning downloading kic base image for docker with crio
	I1127 23:53:51.907735 1525568 out.go:177] * Pulling base image ...
	I1127 23:53:51.909700 1525568 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1127 23:53:51.909742 1525568 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 in local docker daemon
	I1127 23:53:51.909749 1525568 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17206-1455288/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4
	I1127 23:53:51.909842 1525568 cache.go:56] Caching tarball of preloaded images
	I1127 23:53:51.909929 1525568 preload.go:174] Found /home/jenkins/minikube-integration/17206-1455288/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1127 23:53:51.909938 1525568 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I1127 23:53:51.910303 1525568 profile.go:148] Saving config to /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/multinode-784312/config.json ...
	I1127 23:53:51.910332 1525568 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/multinode-784312/config.json: {Name:mkcc2f6f9dbb7663a86067f9848d635b39ef12b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
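
The profile saved above is plain JSON, so the values that matter for this run can be read back directly from config.json. A minimal sketch, assuming jq is installed on the host (jq is an illustration here, not something this run uses):

	PROFILE=/home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/multinode-784312/config.json
	# Read back the container runtime and Kubernetes version recorded for this profile.
	jq -r '.KubernetesConfig | "\(.ContainerRuntime) \(.KubernetesVersion)"' "$PROFILE"
	# Expected for this run: crio v1.28.4
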
	I1127 23:53:51.927286 1525568 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 in local docker daemon, skipping pull
	I1127 23:53:51.927311 1525568 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 exists in daemon, skipping load
	I1127 23:53:51.927332 1525568 cache.go:194] Successfully downloaded all kic artifacts
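
The pull is skipped because the digest-pinned kicbase reference already resolves in the local daemon. A sketch of an equivalent check (not minikube's exact code path, which lives in image.go):

	IMG='gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50'
	# docker image inspect exits 0 iff the image is already present in the daemon.
	if docker image inspect "$IMG" >/dev/null 2>&1; then
	  echo "kicbase present, pull skipped"
	else
	  echo "kicbase missing, would pull"
	fi
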
	I1127 23:53:51.927400 1525568 start.go:365] acquiring machines lock for multinode-784312: {Name:mkdf0670c2bafe5baa8f00d509f004a65436f011 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1127 23:53:51.927510 1525568 start.go:369] acquired machines lock for "multinode-784312" in 87.704µs
	I1127 23:53:51.927540 1525568 start.go:93] Provisioning new machine with config: &{Name:multinode-784312 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-784312 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1127 23:53:51.927633 1525568 start.go:125] createHost starting for "" (driver="docker")
	I1127 23:53:51.930214 1525568 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1127 23:53:51.930496 1525568 start.go:159] libmachine.API.Create for "multinode-784312" (driver="docker")
	I1127 23:53:51.930530 1525568 client.go:168] LocalClient.Create starting
	I1127 23:53:51.930643 1525568 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17206-1455288/.minikube/certs/ca.pem
	I1127 23:53:51.930681 1525568 main.go:141] libmachine: Decoding PEM data...
	I1127 23:53:51.930699 1525568 main.go:141] libmachine: Parsing certificate...
	I1127 23:53:51.930756 1525568 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17206-1455288/.minikube/certs/cert.pem
	I1127 23:53:51.930782 1525568 main.go:141] libmachine: Decoding PEM data...
	I1127 23:53:51.930799 1525568 main.go:141] libmachine: Parsing certificate...
	I1127 23:53:51.931197 1525568 cli_runner.go:164] Run: docker network inspect multinode-784312 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1127 23:53:51.948339 1525568 cli_runner.go:211] docker network inspect multinode-784312 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1127 23:53:51.948416 1525568 network_create.go:281] running [docker network inspect multinode-784312] to gather additional debugging logs...
	I1127 23:53:51.948437 1525568 cli_runner.go:164] Run: docker network inspect multinode-784312
	W1127 23:53:51.965613 1525568 cli_runner.go:211] docker network inspect multinode-784312 returned with exit code 1
	I1127 23:53:51.965647 1525568 network_create.go:284] error running [docker network inspect multinode-784312]: docker network inspect multinode-784312: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-784312 not found
	I1127 23:53:51.965660 1525568 network_create.go:286] output of [docker network inspect multinode-784312]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-784312 not found
	
	** /stderr **
	I1127 23:53:51.965764 1525568 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1127 23:53:51.983457 1525568 network.go:214] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-dd6178619d28 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:d1:b7:12:be} reservation:<nil>}
	I1127 23:53:51.983802 1525568 network.go:209] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40024bff40}
	I1127 23:53:51.983824 1525568 network_create.go:124] attempt to create docker network multinode-784312 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I1127 23:53:51.983888 1525568 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-784312 multinode-784312
	I1127 23:53:52.055879 1525568 network_create.go:108] docker network multinode-784312 192.168.58.0/24 created
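
With 192.168.49.0/24 already taken by an existing bridge, minikube walked to the next free private range and created the network with the flags shown above. Assuming the network now exists, the result can be verified with docker network inspect (the --format template below is illustrative):

	docker network inspect multinode-784312 \
	  --format 'subnet={{(index .IPAM.Config 0).Subnet}} gateway={{(index .IPAM.Config 0).Gateway}}'
	# Expected for this run: subnet=192.168.58.0/24 gateway=192.168.58.1
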
	I1127 23:53:52.055915 1525568 kic.go:121] calculated static IP "192.168.58.2" for the "multinode-784312" container
	I1127 23:53:52.056006 1525568 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1127 23:53:52.073525 1525568 cli_runner.go:164] Run: docker volume create multinode-784312 --label name.minikube.sigs.k8s.io=multinode-784312 --label created_by.minikube.sigs.k8s.io=true
	I1127 23:53:52.096556 1525568 oci.go:103] Successfully created a docker volume multinode-784312
	I1127 23:53:52.096651 1525568 cli_runner.go:164] Run: docker run --rm --name multinode-784312-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-784312 --entrypoint /usr/bin/test -v multinode-784312:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 -d /var/lib
	I1127 23:53:52.733724 1525568 oci.go:107] Successfully prepared a docker volume multinode-784312
	I1127 23:53:52.733768 1525568 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1127 23:53:52.733789 1525568 kic.go:194] Starting extracting preloaded images to volume ...
	I1127 23:53:52.733878 1525568 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17206-1455288/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v multinode-784312:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 -I lz4 -xf /preloaded.tar -C /extractDir
	I1127 23:53:57.123577 1525568 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17206-1455288/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v multinode-784312:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 -I lz4 -xf /preloaded.tar -C /extractDir: (4.38964923s)
	I1127 23:53:57.123619 1525568 kic.go:203] duration metric: took 4.389827 seconds to extract preloaded images to volume
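
The preload tarball is untarred straight into the multinode-784312 volume, which later becomes /var inside the node. To spot-check the result, the same kicbase image can list the extracted store; the /var/lib/containers/storage path is an assumption based on the standard CRI-O layout, not something this log prints:

	# List the extracted CRI-O image store inside the volume.
	docker run --rm --entrypoint /usr/bin/ls \
	  -v multinode-784312:/var \
	  gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 \
	  /var/lib/containers/storage
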
	W1127 23:53:57.123767 1525568 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1127 23:53:57.123875 1525568 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1127 23:53:57.204140 1525568 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-784312 --name multinode-784312 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-784312 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-784312 --network multinode-784312 --ip 192.168.58.2 --volume multinode-784312:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50
	I1127 23:53:57.570861 1525568 cli_runner.go:164] Run: docker container inspect multinode-784312 --format={{.State.Running}}
	I1127 23:53:57.593567 1525568 cli_runner.go:164] Run: docker container inspect multinode-784312 --format={{.State.Status}}
	I1127 23:53:57.623842 1525568 cli_runner.go:164] Run: docker exec multinode-784312 stat /var/lib/dpkg/alternatives/iptables
	I1127 23:53:57.679465 1525568 oci.go:144] the created container "multinode-784312" has a running status.
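
Every node port exposed by the docker run above (8443, 22, 2376, 5000, 32443) is published to an ephemeral host port on 127.0.0.1, which is why the SSH client later dials 127.0.0.1:34144. Either command below recovers that mapping; the second is the exact template this log's cli_runner uses:

	# Host port bound to the node container's SSH port.
	docker port multinode-784312 22/tcp
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' multinode-784312
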
	I1127 23:53:57.679496 1525568 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17206-1455288/.minikube/machines/multinode-784312/id_rsa...
	I1127 23:53:59.051011 1525568 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-1455288/.minikube/machines/multinode-784312/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1127 23:53:59.051071 1525568 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17206-1455288/.minikube/machines/multinode-784312/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1127 23:53:59.072906 1525568 cli_runner.go:164] Run: docker container inspect multinode-784312 --format={{.State.Status}}
	I1127 23:53:59.090645 1525568 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1127 23:53:59.090670 1525568 kic_runner.go:114] Args: [docker exec --privileged multinode-784312 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1127 23:53:59.176019 1525568 cli_runner.go:164] Run: docker container inspect multinode-784312 --format={{.State.Status}}
	I1127 23:53:59.194255 1525568 machine.go:88] provisioning docker machine ...
	I1127 23:53:59.194289 1525568 ubuntu.go:169] provisioning hostname "multinode-784312"
	I1127 23:53:59.194355 1525568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-784312
	I1127 23:53:59.212372 1525568 main.go:141] libmachine: Using SSH client type: native
	I1127 23:53:59.212845 1525568 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3be600] 0x3c0d70 <nil>  [] 0s} 127.0.0.1 34144 <nil> <nil>}
	I1127 23:53:59.212870 1525568 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-784312 && echo "multinode-784312" | sudo tee /etc/hostname
	I1127 23:53:59.360682 1525568 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-784312
	
	I1127 23:53:59.360770 1525568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-784312
	I1127 23:53:59.378864 1525568 main.go:141] libmachine: Using SSH client type: native
	I1127 23:53:59.379285 1525568 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3be600] 0x3c0d70 <nil>  [] 0s} 127.0.0.1 34144 <nil> <nil>}
	I1127 23:53:59.379309 1525568 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-784312' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-784312/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-784312' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1127 23:53:59.507227 1525568 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1127 23:53:59.507251 1525568 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17206-1455288/.minikube CaCertPath:/home/jenkins/minikube-integration/17206-1455288/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17206-1455288/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17206-1455288/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17206-1455288/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17206-1455288/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17206-1455288/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17206-1455288/.minikube}
	I1127 23:53:59.507269 1525568 ubuntu.go:177] setting up certificates
	I1127 23:53:59.507279 1525568 provision.go:83] configureAuth start
	I1127 23:53:59.507337 1525568 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-784312
	I1127 23:53:59.525520 1525568 provision.go:138] copyHostCerts
	I1127 23:53:59.525558 1525568 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-1455288/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17206-1455288/.minikube/ca.pem
	I1127 23:53:59.525595 1525568 exec_runner.go:144] found /home/jenkins/minikube-integration/17206-1455288/.minikube/ca.pem, removing ...
	I1127 23:53:59.525603 1525568 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17206-1455288/.minikube/ca.pem
	I1127 23:53:59.525683 1525568 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17206-1455288/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17206-1455288/.minikube/ca.pem (1078 bytes)
	I1127 23:53:59.525766 1525568 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-1455288/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17206-1455288/.minikube/cert.pem
	I1127 23:53:59.525782 1525568 exec_runner.go:144] found /home/jenkins/minikube-integration/17206-1455288/.minikube/cert.pem, removing ...
	I1127 23:53:59.525786 1525568 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17206-1455288/.minikube/cert.pem
	I1127 23:53:59.525812 1525568 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17206-1455288/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17206-1455288/.minikube/cert.pem (1123 bytes)
	I1127 23:53:59.526048 1525568 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-1455288/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17206-1455288/.minikube/key.pem
	I1127 23:53:59.526082 1525568 exec_runner.go:144] found /home/jenkins/minikube-integration/17206-1455288/.minikube/key.pem, removing ...
	I1127 23:53:59.526088 1525568 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17206-1455288/.minikube/key.pem
	I1127 23:53:59.526123 1525568 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17206-1455288/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17206-1455288/.minikube/key.pem (1679 bytes)
	I1127 23:53:59.526188 1525568 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17206-1455288/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17206-1455288/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17206-1455288/.minikube/certs/ca-key.pem org=jenkins.multinode-784312 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube multinode-784312]
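
The SAN list above goes into the generated server certificate. Assuming OpenSSL 1.1.1 or newer on the host, the SANs can be read back from server.pem to confirm the node IP made it in (a sketch, not part of this run):

	# Print the subjectAltName extension of the generated server certificate.
	openssl x509 -noout -ext subjectAltName \
	  -in /home/jenkins/minikube-integration/17206-1455288/.minikube/machines/server.pem
	# Expected to include IP:192.168.58.2, IP:127.0.0.1, DNS:localhost, DNS:minikube, DNS:multinode-784312
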
	I1127 23:54:00.439451 1525568 provision.go:172] copyRemoteCerts
	I1127 23:54:00.439564 1525568 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1127 23:54:00.439611 1525568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-784312
	I1127 23:54:00.460337 1525568 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34144 SSHKeyPath:/home/jenkins/minikube-integration/17206-1455288/.minikube/machines/multinode-784312/id_rsa Username:docker}
	I1127 23:54:00.561729 1525568 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-1455288/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1127 23:54:00.561795 1525568 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-1455288/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1127 23:54:00.592626 1525568 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-1455288/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1127 23:54:00.592688 1525568 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-1455288/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1127 23:54:00.622061 1525568 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-1455288/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1127 23:54:00.622133 1525568 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-1455288/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1127 23:54:00.651167 1525568 provision.go:86] duration metric: configureAuth took 1.143872967s
	I1127 23:54:00.651193 1525568 ubuntu.go:193] setting minikube options for container-runtime
	I1127 23:54:00.651418 1525568 config.go:182] Loaded profile config "multinode-784312": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1127 23:54:00.651538 1525568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-784312
	I1127 23:54:00.670337 1525568 main.go:141] libmachine: Using SSH client type: native
	I1127 23:54:00.670768 1525568 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3be600] 0x3c0d70 <nil>  [] 0s} 127.0.0.1 34144 <nil> <nil>}
	I1127 23:54:00.670789 1525568 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1127 23:54:00.909264 1525568 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1127 23:54:00.909289 1525568 machine.go:91] provisioned docker machine in 1.715010081s
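
The SSH command above drops a one-line sysconfig file into the node and restarts CRI-O. Since the node is a privileged container, docker exec can stand in for the SSH session to confirm both effects (a sketch under that assumption):

	# Confirm the insecure-registry drop-in landed and crio survived the restart.
	docker exec multinode-784312 cat /etc/sysconfig/crio.minikube
	docker exec multinode-784312 systemctl is-active crio
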
	I1127 23:54:00.909299 1525568 client.go:171] LocalClient.Create took 8.978763312s
	I1127 23:54:00.909318 1525568 start.go:167] duration metric: libmachine.API.Create for "multinode-784312" took 8.978823831s
	I1127 23:54:00.909326 1525568 start.go:300] post-start starting for "multinode-784312" (driver="docker")
	I1127 23:54:00.909346 1525568 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1127 23:54:00.909414 1525568 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1127 23:54:00.909464 1525568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-784312
	I1127 23:54:00.937203 1525568 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34144 SSHKeyPath:/home/jenkins/minikube-integration/17206-1455288/.minikube/machines/multinode-784312/id_rsa Username:docker}
	I1127 23:54:01.033059 1525568 ssh_runner.go:195] Run: cat /etc/os-release
	I1127 23:54:01.037447 1525568 command_runner.go:130] > PRETTY_NAME="Ubuntu 22.04.3 LTS"
	I1127 23:54:01.037467 1525568 command_runner.go:130] > NAME="Ubuntu"
	I1127 23:54:01.037475 1525568 command_runner.go:130] > VERSION_ID="22.04"
	I1127 23:54:01.037484 1525568 command_runner.go:130] > VERSION="22.04.3 LTS (Jammy Jellyfish)"
	I1127 23:54:01.037491 1525568 command_runner.go:130] > VERSION_CODENAME=jammy
	I1127 23:54:01.037495 1525568 command_runner.go:130] > ID=ubuntu
	I1127 23:54:01.037500 1525568 command_runner.go:130] > ID_LIKE=debian
	I1127 23:54:01.037506 1525568 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I1127 23:54:01.037512 1525568 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I1127 23:54:01.037520 1525568 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I1127 23:54:01.037535 1525568 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I1127 23:54:01.037543 1525568 command_runner.go:130] > UBUNTU_CODENAME=jammy
	I1127 23:54:01.037602 1525568 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1127 23:54:01.037628 1525568 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1127 23:54:01.037641 1525568 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1127 23:54:01.037649 1525568 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1127 23:54:01.037663 1525568 filesync.go:126] Scanning /home/jenkins/minikube-integration/17206-1455288/.minikube/addons for local assets ...
	I1127 23:54:01.037723 1525568 filesync.go:126] Scanning /home/jenkins/minikube-integration/17206-1455288/.minikube/files for local assets ...
	I1127 23:54:01.037823 1525568 filesync.go:149] local asset: /home/jenkins/minikube-integration/17206-1455288/.minikube/files/etc/ssl/certs/14606522.pem -> 14606522.pem in /etc/ssl/certs
	I1127 23:54:01.037835 1525568 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-1455288/.minikube/files/etc/ssl/certs/14606522.pem -> /etc/ssl/certs/14606522.pem
	I1127 23:54:01.037968 1525568 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1127 23:54:01.048798 1525568 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-1455288/.minikube/files/etc/ssl/certs/14606522.pem --> /etc/ssl/certs/14606522.pem (1708 bytes)
	I1127 23:54:01.077952 1525568 start.go:303] post-start completed in 168.611562ms
	I1127 23:54:01.078380 1525568 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-784312
	I1127 23:54:01.097182 1525568 profile.go:148] Saving config to /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/multinode-784312/config.json ...
	I1127 23:54:01.097470 1525568 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1127 23:54:01.097524 1525568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-784312
	I1127 23:54:01.115506 1525568 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34144 SSHKeyPath:/home/jenkins/minikube-integration/17206-1455288/.minikube/machines/multinode-784312/id_rsa Username:docker}
	I1127 23:54:01.208573 1525568 command_runner.go:130] > 17%!
	(MISSING)
	I1127 23:54:01.208740 1525568 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1127 23:54:01.215040 1525568 command_runner.go:130] > 161G
	I1127 23:54:01.215501 1525568 start.go:128] duration metric: createHost completed in 9.287854815s
	I1127 23:54:01.215524 1525568 start.go:83] releasing machines lock for "multinode-784312", held for 9.288000134s
	I1127 23:54:01.215633 1525568 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-784312
	I1127 23:54:01.236318 1525568 ssh_runner.go:195] Run: cat /version.json
	I1127 23:54:01.236393 1525568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-784312
	I1127 23:54:01.236650 1525568 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1127 23:54:01.236708 1525568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-784312
	I1127 23:54:01.258169 1525568 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34144 SSHKeyPath:/home/jenkins/minikube-integration/17206-1455288/.minikube/machines/multinode-784312/id_rsa Username:docker}
	I1127 23:54:01.278038 1525568 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34144 SSHKeyPath:/home/jenkins/minikube-integration/17206-1455288/.minikube/machines/multinode-784312/id_rsa Username:docker}
	I1127 23:54:01.354302 1525568 command_runner.go:130] > {"iso_version": "v1.32.1-1699648094-17581", "kicbase_version": "v0.0.42-1700142204-17634", "minikube_version": "v1.32.0", "commit": "6532cab52e164d1138ecb8469e77a57a00b45825"}
	I1127 23:54:01.354450 1525568 ssh_runner.go:195] Run: systemctl --version
	I1127 23:54:01.484191 1525568 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1127 23:54:01.487244 1525568 command_runner.go:130] > systemd 249 (249.11-0ubuntu3.11)
	I1127 23:54:01.487284 1525568 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1127 23:54:01.487347 1525568 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1127 23:54:01.634428 1525568 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1127 23:54:01.639795 1525568 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I1127 23:54:01.639818 1525568 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I1127 23:54:01.639825 1525568 command_runner.go:130] > Device: 36h/54d	Inode: 5708571     Links: 1
	I1127 23:54:01.639834 1525568 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1127 23:54:01.639841 1525568 command_runner.go:130] > Access: 2023-06-14 14:44:50.000000000 +0000
	I1127 23:54:01.639848 1525568 command_runner.go:130] > Modify: 2023-06-14 14:44:50.000000000 +0000
	I1127 23:54:01.639854 1525568 command_runner.go:130] > Change: 2023-11-27 23:30:31.978009364 +0000
	I1127 23:54:01.639860 1525568 command_runner.go:130] >  Birth: 2023-11-27 23:30:31.978009364 +0000
	I1127 23:54:01.640249 1525568 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1127 23:54:01.667581 1525568 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1127 23:54:01.667662 1525568 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1127 23:54:01.710361 1525568 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf, 
	I1127 23:54:01.710409 1525568 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
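
The two find/mv passes above rename competing CNI configs with a .mk_disabled suffix so that only the kindnet config minikube writes later stays active. The result is easy to audit inside the node (docker exec again standing in for the SSH runner):

	# Disabled configs vs. whatever remains active in /etc/cni/net.d.
	docker exec multinode-784312 sh -c \
	  'ls /etc/cni/net.d/*.mk_disabled; echo ---; ls /etc/cni/net.d | grep -v mk_disabled' || true
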
	I1127 23:54:01.710418 1525568 start.go:472] detecting cgroup driver to use...
	I1127 23:54:01.710452 1525568 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1127 23:54:01.710514 1525568 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1127 23:54:01.730724 1525568 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1127 23:54:01.745724 1525568 docker.go:203] disabling cri-docker service (if available) ...
	I1127 23:54:01.745793 1525568 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1127 23:54:01.763759 1525568 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1127 23:54:01.781168 1525568 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1127 23:54:01.882648 1525568 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1127 23:54:01.992000 1525568 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I1127 23:54:01.992028 1525568 docker.go:219] disabling docker service ...
	I1127 23:54:01.992094 1525568 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1127 23:54:02.017729 1525568 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1127 23:54:02.032626 1525568 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1127 23:54:02.133527 1525568 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I1127 23:54:02.133611 1525568 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1127 23:54:02.147373 1525568 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I1127 23:54:02.245729 1525568 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1127 23:54:02.259031 1525568 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1127 23:54:02.278864 1525568 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1127 23:54:02.280692 1525568 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1127 23:54:02.280769 1525568 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1127 23:54:02.293912 1525568 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1127 23:54:02.293979 1525568 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1127 23:54:02.306022 1525568 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1127 23:54:02.317978 1525568 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1127 23:54:02.329825 1525568 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1127 23:54:02.341284 1525568 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1127 23:54:02.350588 1525568 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1127 23:54:02.351896 1525568 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1127 23:54:02.362280 1525568 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1127 23:54:02.460275 1525568 ssh_runner.go:195] Run: sudo systemctl restart crio
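
The three sed edits pin the pause image, set the cgroup manager to match the "cgroupfs" driver detected on the host, and move conmon into the pod cgroup; after the restart all three can be verified with one grep against the drop-in:

	# Verify the values written into the CRI-O drop-in (order in the file may differ).
	docker exec multinode-784312 grep -E 'pause_image|cgroup_manager|conmon_cgroup' \
	  /etc/crio/crio.conf.d/02-crio.conf
	# cgroup_manager = "cgroupfs"
	# conmon_cgroup = "pod"
	# pause_image = "registry.k8s.io/pause:3.9"
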
	I1127 23:54:02.584670 1525568 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1127 23:54:02.584774 1525568 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1127 23:54:02.590076 1525568 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1127 23:54:02.590155 1525568 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1127 23:54:02.590177 1525568 command_runner.go:130] > Device: 43h/67d	Inode: 190         Links: 1
	I1127 23:54:02.590212 1525568 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1127 23:54:02.590235 1525568 command_runner.go:130] > Access: 2023-11-27 23:54:02.571601809 +0000
	I1127 23:54:02.590254 1525568 command_runner.go:130] > Modify: 2023-11-27 23:54:02.571601809 +0000
	I1127 23:54:02.590275 1525568 command_runner.go:130] > Change: 2023-11-27 23:54:02.571601809 +0000
	I1127 23:54:02.590291 1525568 command_runner.go:130] >  Birth: -
	I1127 23:54:02.590429 1525568 start.go:540] Will wait 60s for crictl version
	I1127 23:54:02.590516 1525568 ssh_runner.go:195] Run: which crictl
	I1127 23:54:02.594833 1525568 command_runner.go:130] > /usr/bin/crictl
	I1127 23:54:02.595136 1525568 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1127 23:54:02.636950 1525568 command_runner.go:130] > Version:  0.1.0
	I1127 23:54:02.637237 1525568 command_runner.go:130] > RuntimeName:  cri-o
	I1127 23:54:02.637486 1525568 command_runner.go:130] > RuntimeVersion:  1.24.6
	I1127 23:54:02.637726 1525568 command_runner.go:130] > RuntimeApiVersion:  v1
	I1127 23:54:02.640338 1525568 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1127 23:54:02.640478 1525568 ssh_runner.go:195] Run: crio --version
	I1127 23:54:02.681363 1525568 command_runner.go:130] > crio version 1.24.6
	I1127 23:54:02.681425 1525568 command_runner.go:130] > Version:          1.24.6
	I1127 23:54:02.681448 1525568 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I1127 23:54:02.681467 1525568 command_runner.go:130] > GitTreeState:     clean
	I1127 23:54:02.681500 1525568 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I1127 23:54:02.681609 1525568 command_runner.go:130] > GoVersion:        go1.18.2
	I1127 23:54:02.686805 1525568 command_runner.go:130] > Compiler:         gc
	I1127 23:54:02.686909 1525568 command_runner.go:130] > Platform:         linux/arm64
	I1127 23:54:02.686927 1525568 command_runner.go:130] > Linkmode:         dynamic
	I1127 23:54:02.686938 1525568 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1127 23:54:02.686950 1525568 command_runner.go:130] > SeccompEnabled:   true
	I1127 23:54:02.686956 1525568 command_runner.go:130] > AppArmorEnabled:  false
	I1127 23:54:02.688810 1525568 ssh_runner.go:195] Run: crio --version
	I1127 23:54:02.739424 1525568 command_runner.go:130] > crio version 1.24.6
	I1127 23:54:02.739492 1525568 command_runner.go:130] > Version:          1.24.6
	I1127 23:54:02.739513 1525568 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I1127 23:54:02.739532 1525568 command_runner.go:130] > GitTreeState:     clean
	I1127 23:54:02.739551 1525568 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I1127 23:54:02.739587 1525568 command_runner.go:130] > GoVersion:        go1.18.2
	I1127 23:54:02.739604 1525568 command_runner.go:130] > Compiler:         gc
	I1127 23:54:02.739623 1525568 command_runner.go:130] > Platform:         linux/arm64
	I1127 23:54:02.739644 1525568 command_runner.go:130] > Linkmode:         dynamic
	I1127 23:54:02.739673 1525568 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1127 23:54:02.739695 1525568 command_runner.go:130] > SeccompEnabled:   true
	I1127 23:54:02.739712 1525568 command_runner.go:130] > AppArmorEnabled:  false
	I1127 23:54:02.743502 1525568 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.6 ...
	I1127 23:54:02.744987 1525568 cli_runner.go:164] Run: docker network inspect multinode-784312 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1127 23:54:02.762204 1525568 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I1127 23:54:02.766816 1525568 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1127 23:54:02.780050 1525568 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1127 23:54:02.780115 1525568 ssh_runner.go:195] Run: sudo crictl images --output json
	I1127 23:54:02.843032 1525568 command_runner.go:130] > {
	I1127 23:54:02.843052 1525568 command_runner.go:130] >   "images": [
	I1127 23:54:02.843057 1525568 command_runner.go:130] >     {
	I1127 23:54:02.843066 1525568 command_runner.go:130] >       "id": "04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26",
	I1127 23:54:02.843074 1525568 command_runner.go:130] >       "repoTags": [
	I1127 23:54:02.843083 1525568 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230809-80a64d96"
	I1127 23:54:02.843089 1525568 command_runner.go:130] >       ],
	I1127 23:54:02.843094 1525568 command_runner.go:130] >       "repoDigests": [
	I1127 23:54:02.843105 1525568 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052",
	I1127 23:54:02.843118 1525568 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:f61a1c916e587322444cab4e745a66c8bed6c30208e4dae28d5a1d18c070adb2"
	I1127 23:54:02.843136 1525568 command_runner.go:130] >       ],
	I1127 23:54:02.843143 1525568 command_runner.go:130] >       "size": "60867618",
	I1127 23:54:02.843150 1525568 command_runner.go:130] >       "uid": null,
	I1127 23:54:02.843158 1525568 command_runner.go:130] >       "username": "",
	I1127 23:54:02.843167 1525568 command_runner.go:130] >       "spec": null,
	I1127 23:54:02.843174 1525568 command_runner.go:130] >       "pinned": false
	I1127 23:54:02.843179 1525568 command_runner.go:130] >     },
	I1127 23:54:02.843183 1525568 command_runner.go:130] >     {
	I1127 23:54:02.843193 1525568 command_runner.go:130] >       "id": "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1127 23:54:02.843200 1525568 command_runner.go:130] >       "repoTags": [
	I1127 23:54:02.843207 1525568 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1127 23:54:02.843212 1525568 command_runner.go:130] >       ],
	I1127 23:54:02.843217 1525568 command_runner.go:130] >       "repoDigests": [
	I1127 23:54:02.843233 1525568 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I1127 23:54:02.843247 1525568 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1127 23:54:02.843252 1525568 command_runner.go:130] >       ],
	I1127 23:54:02.843261 1525568 command_runner.go:130] >       "size": "29037500",
	I1127 23:54:02.843268 1525568 command_runner.go:130] >       "uid": null,
	I1127 23:54:02.843273 1525568 command_runner.go:130] >       "username": "",
	I1127 23:54:02.843279 1525568 command_runner.go:130] >       "spec": null,
	I1127 23:54:02.843286 1525568 command_runner.go:130] >       "pinned": false
	I1127 23:54:02.843290 1525568 command_runner.go:130] >     },
	I1127 23:54:02.843295 1525568 command_runner.go:130] >     {
	I1127 23:54:02.843305 1525568 command_runner.go:130] >       "id": "97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108",
	I1127 23:54:02.843310 1525568 command_runner.go:130] >       "repoTags": [
	I1127 23:54:02.843316 1525568 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I1127 23:54:02.843320 1525568 command_runner.go:130] >       ],
	I1127 23:54:02.843328 1525568 command_runner.go:130] >       "repoDigests": [
	I1127 23:54:02.843339 1525568 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:74130b944396a0b0ca9af923ee6e03b08a35d98fc1bbaef4e35cf9acc5599105",
	I1127 23:54:02.843351 1525568 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e"
	I1127 23:54:02.843356 1525568 command_runner.go:130] >       ],
	I1127 23:54:02.843361 1525568 command_runner.go:130] >       "size": "51393451",
	I1127 23:54:02.843368 1525568 command_runner.go:130] >       "uid": null,
	I1127 23:54:02.843374 1525568 command_runner.go:130] >       "username": "",
	I1127 23:54:02.843379 1525568 command_runner.go:130] >       "spec": null,
	I1127 23:54:02.843386 1525568 command_runner.go:130] >       "pinned": false
	I1127 23:54:02.843391 1525568 command_runner.go:130] >     },
	I1127 23:54:02.843395 1525568 command_runner.go:130] >     {
	I1127 23:54:02.843403 1525568 command_runner.go:130] >       "id": "9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace",
	I1127 23:54:02.843413 1525568 command_runner.go:130] >       "repoTags": [
	I1127 23:54:02.843419 1525568 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.9-0"
	I1127 23:54:02.843424 1525568 command_runner.go:130] >       ],
	I1127 23:54:02.843429 1525568 command_runner.go:130] >       "repoDigests": [
	I1127 23:54:02.843440 1525568 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3",
	I1127 23:54:02.843453 1525568 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e60789d18cc66486e6db4094383f9732280092f07a1f5455ecbe11d404c8e48b"
	I1127 23:54:02.843462 1525568 command_runner.go:130] >       ],
	I1127 23:54:02.843471 1525568 command_runner.go:130] >       "size": "182203183",
	I1127 23:54:02.843476 1525568 command_runner.go:130] >       "uid": {
	I1127 23:54:02.843481 1525568 command_runner.go:130] >         "value": "0"
	I1127 23:54:02.843488 1525568 command_runner.go:130] >       },
	I1127 23:54:02.843493 1525568 command_runner.go:130] >       "username": "",
	I1127 23:54:02.843498 1525568 command_runner.go:130] >       "spec": null,
	I1127 23:54:02.843505 1525568 command_runner.go:130] >       "pinned": false
	I1127 23:54:02.843512 1525568 command_runner.go:130] >     },
	I1127 23:54:02.843516 1525568 command_runner.go:130] >     {
	I1127 23:54:02.843526 1525568 command_runner.go:130] >       "id": "04b4c447bb9d4840af3bf7e836397379d65df87c86e55dcd27f31a8d11df2419",
	I1127 23:54:02.843532 1525568 command_runner.go:130] >       "repoTags": [
	I1127 23:54:02.843538 1525568 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.28.4"
	I1127 23:54:02.843545 1525568 command_runner.go:130] >       ],
	I1127 23:54:02.843550 1525568 command_runner.go:130] >       "repoDigests": [
	I1127 23:54:02.843560 1525568 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb",
	I1127 23:54:02.843571 1525568 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a4c3e6bec39f5dcb221a2f08266513ab19b7d977ccc76a0bcaf04d4935ac0fb2"
	I1127 23:54:02.843575 1525568 command_runner.go:130] >       ],
	I1127 23:54:02.843581 1525568 command_runner.go:130] >       "size": "121119694",
	I1127 23:54:02.843586 1525568 command_runner.go:130] >       "uid": {
	I1127 23:54:02.843593 1525568 command_runner.go:130] >         "value": "0"
	I1127 23:54:02.843600 1525568 command_runner.go:130] >       },
	I1127 23:54:02.843606 1525568 command_runner.go:130] >       "username": "",
	I1127 23:54:02.843613 1525568 command_runner.go:130] >       "spec": null,
	I1127 23:54:02.843618 1525568 command_runner.go:130] >       "pinned": false
	I1127 23:54:02.843622 1525568 command_runner.go:130] >     },
	I1127 23:54:02.843627 1525568 command_runner.go:130] >     {
	I1127 23:54:02.843636 1525568 command_runner.go:130] >       "id": "9961cbceaf234d59b7dcf8a197a024f3e3ce4b7fe2b67c2378efd3d209ca994b",
	I1127 23:54:02.843644 1525568 command_runner.go:130] >       "repoTags": [
	I1127 23:54:02.843650 1525568 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.28.4"
	I1127 23:54:02.843655 1525568 command_runner.go:130] >       ],
	I1127 23:54:02.843660 1525568 command_runner.go:130] >       "repoDigests": [
	I1127 23:54:02.843669 1525568 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c",
	I1127 23:54:02.843682 1525568 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:fe49ea386d014cbf10cd16f53900b91bb7e7c32c5cf4fdcea550c33bedfdc92e"
	I1127 23:54:02.843686 1525568 command_runner.go:130] >       ],
	I1127 23:54:02.843691 1525568 command_runner.go:130] >       "size": "117252916",
	I1127 23:54:02.843699 1525568 command_runner.go:130] >       "uid": {
	I1127 23:54:02.843705 1525568 command_runner.go:130] >         "value": "0"
	I1127 23:54:02.843711 1525568 command_runner.go:130] >       },
	I1127 23:54:02.843717 1525568 command_runner.go:130] >       "username": "",
	I1127 23:54:02.843722 1525568 command_runner.go:130] >       "spec": null,
	I1127 23:54:02.843729 1525568 command_runner.go:130] >       "pinned": false
	I1127 23:54:02.843733 1525568 command_runner.go:130] >     },
	I1127 23:54:02.843738 1525568 command_runner.go:130] >     {
	I1127 23:54:02.843746 1525568 command_runner.go:130] >       "id": "3ca3ca488cf13fde14cfc4b3ffde0c53a8c161b030f4a444a797fba6aef38c39",
	I1127 23:54:02.843753 1525568 command_runner.go:130] >       "repoTags": [
	I1127 23:54:02.843762 1525568 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.28.4"
	I1127 23:54:02.843767 1525568 command_runner.go:130] >       ],
	I1127 23:54:02.843774 1525568 command_runner.go:130] >       "repoDigests": [
	I1127 23:54:02.843784 1525568 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:460fb2711108dbfbe9ad77860d0fd8aad842c0e97f1aee757b33f2218daece68",
	I1127 23:54:02.843793 1525568 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"
	I1127 23:54:02.843800 1525568 command_runner.go:130] >       ],
	I1127 23:54:02.843805 1525568 command_runner.go:130] >       "size": "69992343",
	I1127 23:54:02.843811 1525568 command_runner.go:130] >       "uid": null,
	I1127 23:54:02.843816 1525568 command_runner.go:130] >       "username": "",
	I1127 23:54:02.843824 1525568 command_runner.go:130] >       "spec": null,
	I1127 23:54:02.843831 1525568 command_runner.go:130] >       "pinned": false
	I1127 23:54:02.843837 1525568 command_runner.go:130] >     },
	I1127 23:54:02.843844 1525568 command_runner.go:130] >     {
	I1127 23:54:02.843852 1525568 command_runner.go:130] >       "id": "05c284c929889d88306fdb3dd14ee2d0132543740f9e247685243214fc3d2c54",
	I1127 23:54:02.843859 1525568 command_runner.go:130] >       "repoTags": [
	I1127 23:54:02.843865 1525568 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.28.4"
	I1127 23:54:02.843870 1525568 command_runner.go:130] >       ],
	I1127 23:54:02.843875 1525568 command_runner.go:130] >       "repoDigests": [
	I1127 23:54:02.843919 1525568 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba",
	I1127 23:54:02.843932 1525568 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:ddb0fb05335238789e9a847f0c6731e1da918c42a389625cb2a7ec577ca20afe"
	I1127 23:54:02.843937 1525568 command_runner.go:130] >       ],
	I1127 23:54:02.843942 1525568 command_runner.go:130] >       "size": "59253556",
	I1127 23:54:02.843946 1525568 command_runner.go:130] >       "uid": {
	I1127 23:54:02.843952 1525568 command_runner.go:130] >         "value": "0"
	I1127 23:54:02.843956 1525568 command_runner.go:130] >       },
	I1127 23:54:02.843961 1525568 command_runner.go:130] >       "username": "",
	I1127 23:54:02.843966 1525568 command_runner.go:130] >       "spec": null,
	I1127 23:54:02.843971 1525568 command_runner.go:130] >       "pinned": false
	I1127 23:54:02.843975 1525568 command_runner.go:130] >     },
	I1127 23:54:02.843979 1525568 command_runner.go:130] >     {
	I1127 23:54:02.843992 1525568 command_runner.go:130] >       "id": "829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e",
	I1127 23:54:02.843999 1525568 command_runner.go:130] >       "repoTags": [
	I1127 23:54:02.844005 1525568 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I1127 23:54:02.844010 1525568 command_runner.go:130] >       ],
	I1127 23:54:02.844015 1525568 command_runner.go:130] >       "repoDigests": [
	I1127 23:54:02.844027 1525568 command_runner.go:130] >         "registry.k8s.io/pause@sha256:3ec98b8452dc8ae265a6917dfb81587ac78849e520d5dbba6de524851d20eca6",
	I1127 23:54:02.844041 1525568 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"
	I1127 23:54:02.844049 1525568 command_runner.go:130] >       ],
	I1127 23:54:02.844055 1525568 command_runner.go:130] >       "size": "520014",
	I1127 23:54:02.844059 1525568 command_runner.go:130] >       "uid": {
	I1127 23:54:02.844067 1525568 command_runner.go:130] >         "value": "65535"
	I1127 23:54:02.844074 1525568 command_runner.go:130] >       },
	I1127 23:54:02.844079 1525568 command_runner.go:130] >       "username": "",
	I1127 23:54:02.844089 1525568 command_runner.go:130] >       "spec": null,
	I1127 23:54:02.844095 1525568 command_runner.go:130] >       "pinned": false
	I1127 23:54:02.844099 1525568 command_runner.go:130] >     }
	I1127 23:54:02.844106 1525568 command_runner.go:130] >   ]
	I1127 23:54:02.844110 1525568 command_runner.go:130] > }
	I1127 23:54:02.846824 1525568 crio.go:496] all images are preloaded for cri-o runtime.
	I1127 23:54:02.846848 1525568 crio.go:415] Images already preloaded, skipping extraction
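
The crictl images --output json payload above is plain JSON, so the inventory can be decoded with a couple of small structs. A minimal sketch in Go, assuming only the field names visible in the log (these are not crictl's published types):

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// image mirrors the per-image fields visible in the log output above.
	type image struct {
		ID          string   `json:"id"`
		RepoTags    []string `json:"repoTags"`
		RepoDigests []string `json:"repoDigests"`
		Size        string   `json:"size"` // quoted in the JSON, hence a string
		Pinned      bool     `json:"pinned"`
	}

	// imageList is the top-level object: {"images": [...]}.
	type imageList struct {
		Images []image `json:"images"`
	}

	func main() {
		// Same command the runner executes in the log.
		out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
		if err != nil {
			panic(err)
		}
		var list imageList
		if err := json.Unmarshal(out, &list); err != nil {
			panic(err)
		}
		for _, img := range list.Images {
			fmt.Println(img.RepoTags, img.Size)
		}
	}
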
	I1127 23:54:02.846914 1525568 ssh_runner.go:195] Run: sudo crictl images --output json
	I1127 23:54:02.882651 1525568 command_runner.go:130] > {
	I1127 23:54:02.882670 1525568 command_runner.go:130] >   "images": [
	I1127 23:54:02.882675 1525568 command_runner.go:130] >     {
	I1127 23:54:02.882686 1525568 command_runner.go:130] >       "id": "04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26",
	I1127 23:54:02.882691 1525568 command_runner.go:130] >       "repoTags": [
	I1127 23:54:02.882700 1525568 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230809-80a64d96"
	I1127 23:54:02.882705 1525568 command_runner.go:130] >       ],
	I1127 23:54:02.882710 1525568 command_runner.go:130] >       "repoDigests": [
	I1127 23:54:02.882720 1525568 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052",
	I1127 23:54:02.882730 1525568 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:f61a1c916e587322444cab4e745a66c8bed6c30208e4dae28d5a1d18c070adb2"
	I1127 23:54:02.882734 1525568 command_runner.go:130] >       ],
	I1127 23:54:02.882740 1525568 command_runner.go:130] >       "size": "60867618",
	I1127 23:54:02.882746 1525568 command_runner.go:130] >       "uid": null,
	I1127 23:54:02.882751 1525568 command_runner.go:130] >       "username": "",
	I1127 23:54:02.882761 1525568 command_runner.go:130] >       "spec": null,
	I1127 23:54:02.882768 1525568 command_runner.go:130] >       "pinned": false
	I1127 23:54:02.882773 1525568 command_runner.go:130] >     },
	I1127 23:54:02.882777 1525568 command_runner.go:130] >     {
	I1127 23:54:02.882785 1525568 command_runner.go:130] >       "id": "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1127 23:54:02.882790 1525568 command_runner.go:130] >       "repoTags": [
	I1127 23:54:02.882797 1525568 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1127 23:54:02.882801 1525568 command_runner.go:130] >       ],
	I1127 23:54:02.882807 1525568 command_runner.go:130] >       "repoDigests": [
	I1127 23:54:02.882816 1525568 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I1127 23:54:02.882826 1525568 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1127 23:54:02.882830 1525568 command_runner.go:130] >       ],
	I1127 23:54:02.882843 1525568 command_runner.go:130] >       "size": "29037500",
	I1127 23:54:02.882848 1525568 command_runner.go:130] >       "uid": null,
	I1127 23:54:02.882852 1525568 command_runner.go:130] >       "username": "",
	I1127 23:54:02.882857 1525568 command_runner.go:130] >       "spec": null,
	I1127 23:54:02.882862 1525568 command_runner.go:130] >       "pinned": false
	I1127 23:54:02.882866 1525568 command_runner.go:130] >     },
	I1127 23:54:02.882871 1525568 command_runner.go:130] >     {
	I1127 23:54:02.882879 1525568 command_runner.go:130] >       "id": "97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108",
	I1127 23:54:02.882884 1525568 command_runner.go:130] >       "repoTags": [
	I1127 23:54:02.882891 1525568 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I1127 23:54:02.882895 1525568 command_runner.go:130] >       ],
	I1127 23:54:02.882900 1525568 command_runner.go:130] >       "repoDigests": [
	I1127 23:54:02.882909 1525568 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:74130b944396a0b0ca9af923ee6e03b08a35d98fc1bbaef4e35cf9acc5599105",
	I1127 23:54:02.882919 1525568 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e"
	I1127 23:54:02.882923 1525568 command_runner.go:130] >       ],
	I1127 23:54:02.882928 1525568 command_runner.go:130] >       "size": "51393451",
	I1127 23:54:02.882933 1525568 command_runner.go:130] >       "uid": null,
	I1127 23:54:02.882938 1525568 command_runner.go:130] >       "username": "",
	I1127 23:54:02.882943 1525568 command_runner.go:130] >       "spec": null,
	I1127 23:54:02.882950 1525568 command_runner.go:130] >       "pinned": false
	I1127 23:54:02.882954 1525568 command_runner.go:130] >     },
	I1127 23:54:02.882958 1525568 command_runner.go:130] >     {
	I1127 23:54:02.882966 1525568 command_runner.go:130] >       "id": "9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace",
	I1127 23:54:02.882971 1525568 command_runner.go:130] >       "repoTags": [
	I1127 23:54:02.882977 1525568 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.9-0"
	I1127 23:54:02.882983 1525568 command_runner.go:130] >       ],
	I1127 23:54:02.882988 1525568 command_runner.go:130] >       "repoDigests": [
	I1127 23:54:02.882997 1525568 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3",
	I1127 23:54:02.883005 1525568 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e60789d18cc66486e6db4094383f9732280092f07a1f5455ecbe11d404c8e48b"
	I1127 23:54:02.883018 1525568 command_runner.go:130] >       ],
	I1127 23:54:02.883023 1525568 command_runner.go:130] >       "size": "182203183",
	I1127 23:54:02.883028 1525568 command_runner.go:130] >       "uid": {
	I1127 23:54:02.883033 1525568 command_runner.go:130] >         "value": "0"
	I1127 23:54:02.883037 1525568 command_runner.go:130] >       },
	I1127 23:54:02.883042 1525568 command_runner.go:130] >       "username": "",
	I1127 23:54:02.883047 1525568 command_runner.go:130] >       "spec": null,
	I1127 23:54:02.883052 1525568 command_runner.go:130] >       "pinned": false
	I1127 23:54:02.883056 1525568 command_runner.go:130] >     },
	I1127 23:54:02.883060 1525568 command_runner.go:130] >     {
	I1127 23:54:02.883068 1525568 command_runner.go:130] >       "id": "04b4c447bb9d4840af3bf7e836397379d65df87c86e55dcd27f31a8d11df2419",
	I1127 23:54:02.883073 1525568 command_runner.go:130] >       "repoTags": [
	I1127 23:54:02.883079 1525568 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.28.4"
	I1127 23:54:02.883085 1525568 command_runner.go:130] >       ],
	I1127 23:54:02.883091 1525568 command_runner.go:130] >       "repoDigests": [
	I1127 23:54:02.883100 1525568 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb",
	I1127 23:54:02.883109 1525568 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a4c3e6bec39f5dcb221a2f08266513ab19b7d977ccc76a0bcaf04d4935ac0fb2"
	I1127 23:54:02.883114 1525568 command_runner.go:130] >       ],
	I1127 23:54:02.883119 1525568 command_runner.go:130] >       "size": "121119694",
	I1127 23:54:02.883123 1525568 command_runner.go:130] >       "uid": {
	I1127 23:54:02.883137 1525568 command_runner.go:130] >         "value": "0"
	I1127 23:54:02.883143 1525568 command_runner.go:130] >       },
	I1127 23:54:02.883149 1525568 command_runner.go:130] >       "username": "",
	I1127 23:54:02.883153 1525568 command_runner.go:130] >       "spec": null,
	I1127 23:54:02.883158 1525568 command_runner.go:130] >       "pinned": false
	I1127 23:54:02.883164 1525568 command_runner.go:130] >     },
	I1127 23:54:02.883169 1525568 command_runner.go:130] >     {
	I1127 23:54:02.883176 1525568 command_runner.go:130] >       "id": "9961cbceaf234d59b7dcf8a197a024f3e3ce4b7fe2b67c2378efd3d209ca994b",
	I1127 23:54:02.883181 1525568 command_runner.go:130] >       "repoTags": [
	I1127 23:54:02.883188 1525568 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.28.4"
	I1127 23:54:02.883192 1525568 command_runner.go:130] >       ],
	I1127 23:54:02.883197 1525568 command_runner.go:130] >       "repoDigests": [
	I1127 23:54:02.883209 1525568 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c",
	I1127 23:54:02.883218 1525568 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:fe49ea386d014cbf10cd16f53900b91bb7e7c32c5cf4fdcea550c33bedfdc92e"
	I1127 23:54:02.883223 1525568 command_runner.go:130] >       ],
	I1127 23:54:02.883229 1525568 command_runner.go:130] >       "size": "117252916",
	I1127 23:54:02.883234 1525568 command_runner.go:130] >       "uid": {
	I1127 23:54:02.883239 1525568 command_runner.go:130] >         "value": "0"
	I1127 23:54:02.883243 1525568 command_runner.go:130] >       },
	I1127 23:54:02.883248 1525568 command_runner.go:130] >       "username": "",
	I1127 23:54:02.883252 1525568 command_runner.go:130] >       "spec": null,
	I1127 23:54:02.883257 1525568 command_runner.go:130] >       "pinned": false
	I1127 23:54:02.883263 1525568 command_runner.go:130] >     },
	I1127 23:54:02.883267 1525568 command_runner.go:130] >     {
	I1127 23:54:02.883274 1525568 command_runner.go:130] >       "id": "3ca3ca488cf13fde14cfc4b3ffde0c53a8c161b030f4a444a797fba6aef38c39",
	I1127 23:54:02.883280 1525568 command_runner.go:130] >       "repoTags": [
	I1127 23:54:02.883286 1525568 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.28.4"
	I1127 23:54:02.883290 1525568 command_runner.go:130] >       ],
	I1127 23:54:02.883295 1525568 command_runner.go:130] >       "repoDigests": [
	I1127 23:54:02.883303 1525568 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:460fb2711108dbfbe9ad77860d0fd8aad842c0e97f1aee757b33f2218daece68",
	I1127 23:54:02.883314 1525568 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"
	I1127 23:54:02.883318 1525568 command_runner.go:130] >       ],
	I1127 23:54:02.883323 1525568 command_runner.go:130] >       "size": "69992343",
	I1127 23:54:02.883328 1525568 command_runner.go:130] >       "uid": null,
	I1127 23:54:02.883332 1525568 command_runner.go:130] >       "username": "",
	I1127 23:54:02.883337 1525568 command_runner.go:130] >       "spec": null,
	I1127 23:54:02.883342 1525568 command_runner.go:130] >       "pinned": false
	I1127 23:54:02.883346 1525568 command_runner.go:130] >     },
	I1127 23:54:02.883350 1525568 command_runner.go:130] >     {
	I1127 23:54:02.883358 1525568 command_runner.go:130] >       "id": "05c284c929889d88306fdb3dd14ee2d0132543740f9e247685243214fc3d2c54",
	I1127 23:54:02.883363 1525568 command_runner.go:130] >       "repoTags": [
	I1127 23:54:02.883369 1525568 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.28.4"
	I1127 23:54:02.883373 1525568 command_runner.go:130] >       ],
	I1127 23:54:02.883378 1525568 command_runner.go:130] >       "repoDigests": [
	I1127 23:54:02.883415 1525568 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba",
	I1127 23:54:02.883425 1525568 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:ddb0fb05335238789e9a847f0c6731e1da918c42a389625cb2a7ec577ca20afe"
	I1127 23:54:02.883429 1525568 command_runner.go:130] >       ],
	I1127 23:54:02.883434 1525568 command_runner.go:130] >       "size": "59253556",
	I1127 23:54:02.883441 1525568 command_runner.go:130] >       "uid": {
	I1127 23:54:02.883446 1525568 command_runner.go:130] >         "value": "0"
	I1127 23:54:02.883450 1525568 command_runner.go:130] >       },
	I1127 23:54:02.883455 1525568 command_runner.go:130] >       "username": "",
	I1127 23:54:02.883460 1525568 command_runner.go:130] >       "spec": null,
	I1127 23:54:02.883464 1525568 command_runner.go:130] >       "pinned": false
	I1127 23:54:02.883468 1525568 command_runner.go:130] >     },
	I1127 23:54:02.883473 1525568 command_runner.go:130] >     {
	I1127 23:54:02.883482 1525568 command_runner.go:130] >       "id": "829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e",
	I1127 23:54:02.883486 1525568 command_runner.go:130] >       "repoTags": [
	I1127 23:54:02.883492 1525568 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I1127 23:54:02.883496 1525568 command_runner.go:130] >       ],
	I1127 23:54:02.883501 1525568 command_runner.go:130] >       "repoDigests": [
	I1127 23:54:02.883510 1525568 command_runner.go:130] >         "registry.k8s.io/pause@sha256:3ec98b8452dc8ae265a6917dfb81587ac78849e520d5dbba6de524851d20eca6",
	I1127 23:54:02.883519 1525568 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"
	I1127 23:54:02.883523 1525568 command_runner.go:130] >       ],
	I1127 23:54:02.883528 1525568 command_runner.go:130] >       "size": "520014",
	I1127 23:54:02.883533 1525568 command_runner.go:130] >       "uid": {
	I1127 23:54:02.883539 1525568 command_runner.go:130] >         "value": "65535"
	I1127 23:54:02.883543 1525568 command_runner.go:130] >       },
	I1127 23:54:02.883548 1525568 command_runner.go:130] >       "username": "",
	I1127 23:54:02.883552 1525568 command_runner.go:130] >       "spec": null,
	I1127 23:54:02.883557 1525568 command_runner.go:130] >       "pinned": false
	I1127 23:54:02.883561 1525568 command_runner.go:130] >     }
	I1127 23:54:02.883565 1525568 command_runner.go:130] >   ]
	I1127 23:54:02.883569 1525568 command_runner.go:130] > }
	I1127 23:54:02.885148 1525568 crio.go:496] all images are preloaded for cri-o runtime.
	I1127 23:54:02.885165 1525568 cache_images.go:84] Images are preloaded, skipping loading
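
A rough sketch of the "Images are preloaded, skipping loading" gate that follows: it reduces to checking that every repo tag required for the target Kubernetes version appears in the decoded list. The required-tags slice below is an assumption read off the v1.28.4 images listed above, not minikube's actual lookup table; imageList is the type from the sketch earlier.

	// allPreloaded reports whether every required repo tag is present in the
	// crictl image list.
	func allPreloaded(list imageList) bool {
		required := []string{
			"registry.k8s.io/kube-apiserver:v1.28.4",
			"registry.k8s.io/kube-controller-manager:v1.28.4",
			"registry.k8s.io/kube-scheduler:v1.28.4",
			"registry.k8s.io/kube-proxy:v1.28.4",
			"registry.k8s.io/etcd:3.5.9-0",
			"registry.k8s.io/coredns/coredns:v1.10.1",
			"registry.k8s.io/pause:3.9",
		}
		have := make(map[string]bool)
		for _, img := range list.Images {
			for _, tag := range img.RepoTags {
				have[tag] = true
			}
		}
		for _, tag := range required {
			if !have[tag] {
				return false
			}
		}
		return true
	}
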
	I1127 23:54:02.885235 1525568 ssh_runner.go:195] Run: crio config
	I1127 23:54:02.940484 1525568 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1127 23:54:02.940523 1525568 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1127 23:54:02.940532 1525568 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1127 23:54:02.940540 1525568 command_runner.go:130] > #
	I1127 23:54:02.940553 1525568 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1127 23:54:02.940561 1525568 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1127 23:54:02.940569 1525568 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1127 23:54:02.940589 1525568 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1127 23:54:02.940598 1525568 command_runner.go:130] > # reload'.
	I1127 23:54:02.940606 1525568 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1127 23:54:02.940614 1525568 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1127 23:54:02.940625 1525568 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1127 23:54:02.940632 1525568 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1127 23:54:02.940637 1525568 command_runner.go:130] > [crio]
	I1127 23:54:02.940648 1525568 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1127 23:54:02.940655 1525568 command_runner.go:130] > # container images, in this directory.
	I1127 23:54:02.940665 1525568 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1127 23:54:02.940676 1525568 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1127 23:54:02.940683 1525568 command_runner.go:130] > # runroot = "/tmp/containers-user-1000/containers"
	I1127 23:54:02.940694 1525568 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1127 23:54:02.940702 1525568 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1127 23:54:02.940717 1525568 command_runner.go:130] > # storage_driver = "vfs"
	I1127 23:54:02.940724 1525568 command_runner.go:130] > # List of options to pass to the storage driver. Please refer to
	I1127 23:54:02.940734 1525568 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1127 23:54:02.940740 1525568 command_runner.go:130] > # storage_option = [
	I1127 23:54:02.940746 1525568 command_runner.go:130] > # ]
	I1127 23:54:02.940754 1525568 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1127 23:54:02.940765 1525568 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1127 23:54:02.940771 1525568 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1127 23:54:02.940781 1525568 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1127 23:54:02.940789 1525568 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1127 23:54:02.940798 1525568 command_runner.go:130] > # always happen on a node reboot
	I1127 23:54:02.940804 1525568 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1127 23:54:02.940811 1525568 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1127 23:54:02.940821 1525568 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1127 23:54:02.940833 1525568 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1127 23:54:02.940844 1525568 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I1127 23:54:02.940855 1525568 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1127 23:54:02.940868 1525568 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1127 23:54:02.940875 1525568 command_runner.go:130] > # internal_wipe = true
	I1127 23:54:02.940885 1525568 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1127 23:54:02.940893 1525568 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1127 23:54:02.940900 1525568 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1127 23:54:02.941148 1525568 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1127 23:54:02.941170 1525568 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1127 23:54:02.941175 1525568 command_runner.go:130] > [crio.api]
	I1127 23:54:02.941187 1525568 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1127 23:54:02.941193 1525568 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1127 23:54:02.941200 1525568 command_runner.go:130] > # IP address on which the stream server will listen.
	I1127 23:54:02.941210 1525568 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1127 23:54:02.941219 1525568 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1127 23:54:02.941229 1525568 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1127 23:54:02.941235 1525568 command_runner.go:130] > # stream_port = "0"
	I1127 23:54:02.941246 1525568 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1127 23:54:02.941256 1525568 command_runner.go:130] > # stream_enable_tls = false
	I1127 23:54:02.941263 1525568 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1127 23:54:02.941268 1525568 command_runner.go:130] > # stream_idle_timeout = ""
	I1127 23:54:02.941278 1525568 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1127 23:54:02.941286 1525568 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1127 23:54:02.941294 1525568 command_runner.go:130] > # minutes.
	I1127 23:54:02.941299 1525568 command_runner.go:130] > # stream_tls_cert = ""
	I1127 23:54:02.941306 1525568 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1127 23:54:02.941317 1525568 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1127 23:54:02.941322 1525568 command_runner.go:130] > # stream_tls_key = ""
	I1127 23:54:02.941334 1525568 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1127 23:54:02.941342 1525568 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1127 23:54:02.941351 1525568 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1127 23:54:02.941356 1525568 command_runner.go:130] > # stream_tls_ca = ""
	I1127 23:54:02.941365 1525568 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I1127 23:54:02.941376 1525568 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1127 23:54:02.941385 1525568 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I1127 23:54:02.941395 1525568 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I1127 23:54:02.941416 1525568 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1127 23:54:02.941426 1525568 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1127 23:54:02.941432 1525568 command_runner.go:130] > [crio.runtime]
	I1127 23:54:02.941443 1525568 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1127 23:54:02.941450 1525568 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1127 23:54:02.941455 1525568 command_runner.go:130] > # "nofile=1024:2048"
	I1127 23:54:02.941465 1525568 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1127 23:54:02.941471 1525568 command_runner.go:130] > # default_ulimits = [
	I1127 23:54:02.941477 1525568 command_runner.go:130] > # ]
	I1127 23:54:02.941484 1525568 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1127 23:54:02.941494 1525568 command_runner.go:130] > # no_pivot = false
	I1127 23:54:02.941501 1525568 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1127 23:54:02.941512 1525568 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1127 23:54:02.941518 1525568 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1127 23:54:02.941528 1525568 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1127 23:54:02.941535 1525568 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1127 23:54:02.941543 1525568 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1127 23:54:02.941719 1525568 command_runner.go:130] > # conmon = ""
	I1127 23:54:02.941733 1525568 command_runner.go:130] > # Cgroup setting for conmon
	I1127 23:54:02.941743 1525568 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1127 23:54:02.941751 1525568 command_runner.go:130] > conmon_cgroup = "pod"
	I1127 23:54:02.941763 1525568 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1127 23:54:02.941770 1525568 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1127 23:54:02.941780 1525568 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1127 23:54:02.941790 1525568 command_runner.go:130] > # conmon_env = [
	I1127 23:54:02.941794 1525568 command_runner.go:130] > # ]
	I1127 23:54:02.941801 1525568 command_runner.go:130] > # Additional environment variables to set for all the
	I1127 23:54:02.941811 1525568 command_runner.go:130] > # containers. These are overridden if set in the
	I1127 23:54:02.941819 1525568 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1127 23:54:02.941824 1525568 command_runner.go:130] > # default_env = [
	I1127 23:54:02.941828 1525568 command_runner.go:130] > # ]
	I1127 23:54:02.941841 1525568 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1127 23:54:02.942071 1525568 command_runner.go:130] > # selinux = false
	I1127 23:54:02.942089 1525568 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1127 23:54:02.942099 1525568 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1127 23:54:02.942109 1525568 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1127 23:54:02.942114 1525568 command_runner.go:130] > # seccomp_profile = ""
	I1127 23:54:02.942122 1525568 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1127 23:54:02.942134 1525568 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1127 23:54:02.942142 1525568 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1127 23:54:02.942152 1525568 command_runner.go:130] > # which might increase security.
	I1127 23:54:02.942158 1525568 command_runner.go:130] > # seccomp_use_default_when_empty = true
	I1127 23:54:02.942166 1525568 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1127 23:54:02.942177 1525568 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1127 23:54:02.942185 1525568 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1127 23:54:02.942193 1525568 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1127 23:54:02.942201 1525568 command_runner.go:130] > # This option supports live configuration reload.
	I1127 23:54:02.942207 1525568 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1127 23:54:02.942225 1525568 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1127 23:54:02.942234 1525568 command_runner.go:130] > # the cgroup blockio controller.
	I1127 23:54:02.942241 1525568 command_runner.go:130] > # blockio_config_file = ""
	I1127 23:54:02.942253 1525568 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1127 23:54:02.942258 1525568 command_runner.go:130] > # irqbalance daemon.
	I1127 23:54:02.942269 1525568 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1127 23:54:02.942277 1525568 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1127 23:54:02.942285 1525568 command_runner.go:130] > # This option supports live configuration reload.
	I1127 23:54:02.942290 1525568 command_runner.go:130] > # rdt_config_file = ""
	I1127 23:54:02.942297 1525568 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1127 23:54:02.942308 1525568 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1127 23:54:02.942316 1525568 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1127 23:54:02.942324 1525568 command_runner.go:130] > # separate_pull_cgroup = ""
	I1127 23:54:02.942333 1525568 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1127 23:54:02.942343 1525568 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1127 23:54:02.942349 1525568 command_runner.go:130] > # will be added.
	I1127 23:54:02.942358 1525568 command_runner.go:130] > # default_capabilities = [
	I1127 23:54:02.942362 1525568 command_runner.go:130] > # 	"CHOWN",
	I1127 23:54:02.942367 1525568 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1127 23:54:02.942375 1525568 command_runner.go:130] > # 	"FSETID",
	I1127 23:54:02.942382 1525568 command_runner.go:130] > # 	"FOWNER",
	I1127 23:54:02.942387 1525568 command_runner.go:130] > # 	"SETGID",
	I1127 23:54:02.942393 1525568 command_runner.go:130] > # 	"SETUID",
	I1127 23:54:02.942402 1525568 command_runner.go:130] > # 	"SETPCAP",
	I1127 23:54:02.942408 1525568 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1127 23:54:02.942583 1525568 command_runner.go:130] > # 	"KILL",
	I1127 23:54:02.942597 1525568 command_runner.go:130] > # ]
	I1127 23:54:02.942621 1525568 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1127 23:54:02.942635 1525568 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1127 23:54:02.942642 1525568 command_runner.go:130] > # add_inheritable_capabilities = true
	I1127 23:54:02.942653 1525568 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1127 23:54:02.942660 1525568 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1127 23:54:02.942669 1525568 command_runner.go:130] > # default_sysctls = [
	I1127 23:54:02.942673 1525568 command_runner.go:130] > # ]
	I1127 23:54:02.942679 1525568 command_runner.go:130] > # List of devices on the host that a
	I1127 23:54:02.942690 1525568 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1127 23:54:02.942695 1525568 command_runner.go:130] > # allowed_devices = [
	I1127 23:54:02.942701 1525568 command_runner.go:130] > # 	"/dev/fuse",
	I1127 23:54:02.942705 1525568 command_runner.go:130] > # ]
	I1127 23:54:02.942714 1525568 command_runner.go:130] > # List of additional devices, specified as
	I1127 23:54:02.942743 1525568 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1127 23:54:02.942754 1525568 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1127 23:54:02.942763 1525568 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1127 23:54:02.942774 1525568 command_runner.go:130] > # additional_devices = [
	I1127 23:54:02.942778 1525568 command_runner.go:130] > # ]
	I1127 23:54:02.942786 1525568 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1127 23:54:02.942791 1525568 command_runner.go:130] > # cdi_spec_dirs = [
	I1127 23:54:02.942798 1525568 command_runner.go:130] > # 	"/etc/cdi",
	I1127 23:54:02.942807 1525568 command_runner.go:130] > # 	"/var/run/cdi",
	I1127 23:54:02.942812 1525568 command_runner.go:130] > # ]
	I1127 23:54:02.942819 1525568 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1127 23:54:02.942830 1525568 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1127 23:54:02.942836 1525568 command_runner.go:130] > # Defaults to false.
	I1127 23:54:02.942845 1525568 command_runner.go:130] > # device_ownership_from_security_context = false
	I1127 23:54:02.942853 1525568 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1127 23:54:02.942861 1525568 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1127 23:54:02.942868 1525568 command_runner.go:130] > # hooks_dir = [
	I1127 23:54:02.943069 1525568 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1127 23:54:02.943083 1525568 command_runner.go:130] > # ]
	I1127 23:54:02.943091 1525568 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1127 23:54:02.943104 1525568 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1127 23:54:02.943113 1525568 command_runner.go:130] > # its default mounts from the following two files:
	I1127 23:54:02.943118 1525568 command_runner.go:130] > #
	I1127 23:54:02.943139 1525568 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1127 23:54:02.943147 1525568 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1127 23:54:02.943158 1525568 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1127 23:54:02.943162 1525568 command_runner.go:130] > #
	I1127 23:54:02.943170 1525568 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1127 23:54:02.943181 1525568 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1127 23:54:02.943189 1525568 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1127 23:54:02.943197 1525568 command_runner.go:130] > #      only add mounts it finds in this file.
	I1127 23:54:02.943201 1525568 command_runner.go:130] > #
	I1127 23:54:02.943207 1525568 command_runner.go:130] > # default_mounts_file = ""
	I1127 23:54:02.943219 1525568 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1127 23:54:02.943228 1525568 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1127 23:54:02.943236 1525568 command_runner.go:130] > # pids_limit = 0
	I1127 23:54:02.943244 1525568 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1127 23:54:02.943254 1525568 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1127 23:54:02.943268 1525568 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1127 23:54:02.943281 1525568 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1127 23:54:02.943287 1525568 command_runner.go:130] > # log_size_max = -1
	I1127 23:54:02.943295 1525568 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1127 23:54:02.943305 1525568 command_runner.go:130] > # log_to_journald = false
	I1127 23:54:02.943313 1525568 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1127 23:54:02.943323 1525568 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1127 23:54:02.943329 1525568 command_runner.go:130] > # Path to directory for container attach sockets.
	I1127 23:54:02.943338 1525568 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1127 23:54:02.943345 1525568 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1127 23:54:02.943353 1525568 command_runner.go:130] > # bind_mount_prefix = ""
	I1127 23:54:02.943360 1525568 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1127 23:54:02.943365 1525568 command_runner.go:130] > # read_only = false
	I1127 23:54:02.943374 1525568 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1127 23:54:02.943382 1525568 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1127 23:54:02.943391 1525568 command_runner.go:130] > # live configuration reload.
	I1127 23:54:02.943396 1525568 command_runner.go:130] > # log_level = "info"
	I1127 23:54:02.943403 1525568 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1127 23:54:02.943415 1525568 command_runner.go:130] > # This option supports live configuration reload.
	I1127 23:54:02.943420 1525568 command_runner.go:130] > # log_filter = ""
	I1127 23:54:02.943431 1525568 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1127 23:54:02.943439 1525568 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1127 23:54:02.943447 1525568 command_runner.go:130] > # separated by comma.
	I1127 23:54:02.943452 1525568 command_runner.go:130] > # uid_mappings = ""
	I1127 23:54:02.943459 1525568 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1127 23:54:02.943468 1525568 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1127 23:54:02.943474 1525568 command_runner.go:130] > # separated by comma.
	I1127 23:54:02.943480 1525568 command_runner.go:130] > # gid_mappings = ""
	I1127 23:54:02.943492 1525568 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1127 23:54:02.943499 1525568 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1127 23:54:02.943527 1525568 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1127 23:54:02.943538 1525568 command_runner.go:130] > # minimum_mappable_uid = -1
	I1127 23:54:02.943547 1525568 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1127 23:54:02.943554 1525568 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1127 23:54:02.943562 1525568 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1127 23:54:02.943567 1525568 command_runner.go:130] > # minimum_mappable_gid = -1
	I1127 23:54:02.943580 1525568 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1127 23:54:02.943588 1525568 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1127 23:54:02.943598 1525568 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1127 23:54:02.943604 1525568 command_runner.go:130] > # ctr_stop_timeout = 30
	I1127 23:54:02.943615 1525568 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1127 23:54:02.943622 1525568 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1127 23:54:02.943630 1525568 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1127 23:54:02.943636 1525568 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1127 23:54:02.943644 1525568 command_runner.go:130] > # drop_infra_ctr = true
	I1127 23:54:02.943652 1525568 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1127 23:54:02.943662 1525568 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1127 23:54:02.943672 1525568 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1127 23:54:02.943996 1525568 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1127 23:54:02.944016 1525568 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1127 23:54:02.944024 1525568 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1127 23:54:02.944029 1525568 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1127 23:54:02.944038 1525568 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1127 23:54:02.944043 1525568 command_runner.go:130] > # pinns_path = ""
	I1127 23:54:02.944051 1525568 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1127 23:54:02.944072 1525568 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I1127 23:54:02.944087 1525568 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I1127 23:54:02.944093 1525568 command_runner.go:130] > # default_runtime = "runc"
	I1127 23:54:02.944099 1525568 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1127 23:54:02.944109 1525568 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior of creating it as a directory).
	I1127 23:54:02.944123 1525568 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1127 23:54:02.944132 1525568 command_runner.go:130] > # creation as a file is not desired either.
	I1127 23:54:02.944144 1525568 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1127 23:54:02.944154 1525568 command_runner.go:130] > # the hostname is being managed dynamically.
	I1127 23:54:02.944160 1525568 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1127 23:54:02.944167 1525568 command_runner.go:130] > # ]
	I1127 23:54:02.944175 1525568 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1127 23:54:02.944183 1525568 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1127 23:54:02.944195 1525568 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I1127 23:54:02.944204 1525568 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I1127 23:54:02.944208 1525568 command_runner.go:130] > #
	I1127 23:54:02.944215 1525568 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I1127 23:54:02.944223 1525568 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I1127 23:54:02.944229 1525568 command_runner.go:130] > #  runtime_type = "oci"
	I1127 23:54:02.944237 1525568 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I1127 23:54:02.944243 1525568 command_runner.go:130] > #  privileged_without_host_devices = false
	I1127 23:54:02.944248 1525568 command_runner.go:130] > #  allowed_annotations = []
	I1127 23:54:02.944253 1525568 command_runner.go:130] > # Where:
	I1127 23:54:02.944262 1525568 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I1127 23:54:02.944275 1525568 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I1127 23:54:02.944285 1525568 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1127 23:54:02.944293 1525568 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1127 23:54:02.944298 1525568 command_runner.go:130] > #   in $PATH.
	I1127 23:54:02.944305 1525568 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I1127 23:54:02.944313 1525568 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1127 23:54:02.944322 1525568 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I1127 23:54:02.944329 1525568 command_runner.go:130] > #   state.
	I1127 23:54:02.944337 1525568 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1127 23:54:02.944344 1525568 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1127 23:54:02.944355 1525568 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1127 23:54:02.944364 1525568 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1127 23:54:02.944375 1525568 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1127 23:54:02.944383 1525568 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1127 23:54:02.944389 1525568 command_runner.go:130] > #   The currently recognized values are:
	I1127 23:54:02.944399 1525568 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1127 23:54:02.944408 1525568 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1127 23:54:02.944415 1525568 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1127 23:54:02.944427 1525568 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1127 23:54:02.944498 1525568 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1127 23:54:02.944515 1525568 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1127 23:54:02.944523 1525568 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1127 23:54:02.944534 1525568 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I1127 23:54:02.944545 1525568 command_runner.go:130] > #   should be moved to the container's cgroup
	I1127 23:54:02.944554 1525568 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1127 23:54:02.944574 1525568 command_runner.go:130] > runtime_path = "/usr/lib/cri-o-runc/sbin/runc"
	I1127 23:54:02.944586 1525568 command_runner.go:130] > runtime_type = "oci"
	I1127 23:54:02.944592 1525568 command_runner.go:130] > runtime_root = "/run/runc"
	I1127 23:54:02.944607 1525568 command_runner.go:130] > runtime_config_path = ""
	I1127 23:54:02.944612 1525568 command_runner.go:130] > monitor_path = ""
	I1127 23:54:02.944617 1525568 command_runner.go:130] > monitor_cgroup = ""
	I1127 23:54:02.944622 1525568 command_runner.go:130] > monitor_exec_cgroup = ""
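
The [crio.runtime.runtimes.runc] stanza above is ordinary TOML, so a handler entry can be read back with any TOML decoder. A minimal sketch using github.com/BurntSushi/toml; the struct layout is an assumption mirroring the keys shown in the log, not CRI-O's internal config types:

	package main

	import (
		"fmt"

		"github.com/BurntSushi/toml"
	)

	// runtimeHandler mirrors the keys of the runc stanza above (assumed names).
	type runtimeHandler struct {
		RuntimePath string `toml:"runtime_path"`
		RuntimeType string `toml:"runtime_type"`
		RuntimeRoot string `toml:"runtime_root"`
	}

	// crioConfig captures just the [crio.runtime.runtimes.*] nesting.
	type crioConfig struct {
		Crio struct {
			Runtime struct {
				Runtimes map[string]runtimeHandler `toml:"runtimes"`
			} `toml:"runtime"`
		} `toml:"crio"`
	}

	func main() {
		src := `
	[crio.runtime.runtimes.runc]
	runtime_path = "/usr/lib/cri-o-runc/sbin/runc"
	runtime_type = "oci"
	runtime_root = "/run/runc"
	`
		var cfg crioConfig
		if _, err := toml.Decode(src, &cfg); err != nil {
			panic(err)
		}
		fmt.Printf("%+v\n", cfg.Crio.Runtime.Runtimes["runc"])
	}
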
	I1127 23:54:02.944659 1525568 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I1127 23:54:02.944667 1525568 command_runner.go:130] > # running containers
	I1127 23:54:02.944673 1525568 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I1127 23:54:02.944681 1525568 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I1127 23:54:02.944695 1525568 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I1127 23:54:02.944703 1525568 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I1127 23:54:02.944709 1525568 command_runner.go:130] > # Kata Containers with the default configured VMM
	I1127 23:54:02.944715 1525568 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I1127 23:54:02.944730 1525568 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I1127 23:54:02.944736 1525568 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I1127 23:54:02.944742 1525568 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I1127 23:54:02.944750 1525568 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I1127 23:54:02.944758 1525568 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1127 23:54:02.944767 1525568 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1127 23:54:02.944774 1525568 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1127 23:54:02.944783 1525568 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I1127 23:54:02.944793 1525568 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I1127 23:54:02.944803 1525568 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1127 23:54:02.944814 1525568 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1127 23:54:02.944824 1525568 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1127 23:54:02.944835 1525568 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1127 23:54:02.944844 1525568 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1127 23:54:02.944853 1525568 command_runner.go:130] > # Example:
	I1127 23:54:02.944859 1525568 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1127 23:54:02.944865 1525568 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1127 23:54:02.944871 1525568 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1127 23:54:02.944878 1525568 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1127 23:54:02.944886 1525568 command_runner.go:130] > # cpuset = 0
	I1127 23:54:02.944890 1525568 command_runner.go:130] > # cpushares = "0-1"
	I1127 23:54:02.944895 1525568 command_runner.go:130] > # Where:
	I1127 23:54:02.944901 1525568 command_runner.go:130] > # The workload name is workload-type.
	I1127 23:54:02.944911 1525568 command_runner.go:130] > # To select it, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1127 23:54:02.944921 1525568 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1127 23:54:02.944928 1525568 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1127 23:54:02.944941 1525568 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1127 23:54:02.944949 1525568 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1127 23:54:02.944953 1525568 command_runner.go:130] > # 
	I1127 23:54:02.944961 1525568 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1127 23:54:02.944967 1525568 command_runner.go:130] > #
	I1127 23:54:02.944976 1525568 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1127 23:54:02.944989 1525568 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1127 23:54:02.944997 1525568 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1127 23:54:02.945008 1525568 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1127 23:54:02.945016 1525568 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I1127 23:54:02.945024 1525568 command_runner.go:130] > [crio.image]
	I1127 23:54:02.945031 1525568 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1127 23:54:02.945037 1525568 command_runner.go:130] > # default_transport = "docker://"
	I1127 23:54:02.945045 1525568 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1127 23:54:02.945053 1525568 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1127 23:54:02.945060 1525568 command_runner.go:130] > # global_auth_file = ""
	I1127 23:54:02.945067 1525568 command_runner.go:130] > # The image used to instantiate infra containers.
	I1127 23:54:02.945073 1525568 command_runner.go:130] > # This option supports live configuration reload.
	I1127 23:54:02.945079 1525568 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I1127 23:54:02.945090 1525568 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1127 23:54:02.945097 1525568 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1127 23:54:02.945106 1525568 command_runner.go:130] > # This option supports live configuration reload.
	I1127 23:54:02.945112 1525568 command_runner.go:130] > # pause_image_auth_file = ""
	I1127 23:54:02.945119 1525568 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1127 23:54:02.945190 1525568 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1127 23:54:02.945205 1525568 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1127 23:54:02.945213 1525568 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1127 23:54:02.945218 1525568 command_runner.go:130] > # pause_command = "/pause"
	I1127 23:54:02.945228 1525568 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1127 23:54:02.945236 1525568 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1127 23:54:02.945330 1525568 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1127 23:54:02.945340 1525568 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1127 23:54:02.945350 1525568 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1127 23:54:02.945355 1525568 command_runner.go:130] > # signature_policy = ""
	I1127 23:54:02.945362 1525568 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1127 23:54:02.945377 1525568 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1127 23:54:02.945382 1525568 command_runner.go:130] > # changing them here.
	I1127 23:54:02.945390 1525568 command_runner.go:130] > # insecure_registries = [
	I1127 23:54:02.945776 1525568 command_runner.go:130] > # ]
	I1127 23:54:02.945795 1525568 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1127 23:54:02.945802 1525568 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1127 23:54:02.945816 1525568 command_runner.go:130] > # image_volumes = "mkdir"
	I1127 23:54:02.945827 1525568 command_runner.go:130] > # Temporary directory to use for storing big files
	I1127 23:54:02.945833 1525568 command_runner.go:130] > # big_files_temporary_dir = ""
	I1127 23:54:02.945861 1525568 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1127 23:54:02.945872 1525568 command_runner.go:130] > # CNI plugins.
	I1127 23:54:02.945877 1525568 command_runner.go:130] > [crio.network]
	I1127 23:54:02.945885 1525568 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1127 23:54:02.945902 1525568 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1127 23:54:02.945908 1525568 command_runner.go:130] > # cni_default_network = ""
	I1127 23:54:02.945916 1525568 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1127 23:54:02.945927 1525568 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1127 23:54:02.945934 1525568 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1127 23:54:02.945939 1525568 command_runner.go:130] > # plugin_dirs = [
	I1127 23:54:02.945944 1525568 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1127 23:54:02.945952 1525568 command_runner.go:130] > # ]
	I1127 23:54:02.945959 1525568 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1127 23:54:02.945964 1525568 command_runner.go:130] > [crio.metrics]
	I1127 23:54:02.945973 1525568 command_runner.go:130] > # Globally enable or disable metrics support.
	I1127 23:54:02.945979 1525568 command_runner.go:130] > # enable_metrics = false
	I1127 23:54:02.945985 1525568 command_runner.go:130] > # Specify enabled metrics collectors.
	I1127 23:54:02.945991 1525568 command_runner.go:130] > # Per default all metrics are enabled.
	I1127 23:54:02.946000 1525568 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I1127 23:54:02.946012 1525568 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1127 23:54:02.946022 1525568 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1127 23:54:02.946030 1525568 command_runner.go:130] > # metrics_collectors = [
	I1127 23:54:02.946035 1525568 command_runner.go:130] > # 	"operations",
	I1127 23:54:02.946044 1525568 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1127 23:54:02.946054 1525568 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1127 23:54:02.946059 1525568 command_runner.go:130] > # 	"operations_errors",
	I1127 23:54:02.946064 1525568 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1127 23:54:02.946069 1525568 command_runner.go:130] > # 	"image_pulls_by_name",
	I1127 23:54:02.946075 1525568 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1127 23:54:02.946083 1525568 command_runner.go:130] > # 	"image_pulls_failures",
	I1127 23:54:02.946089 1525568 command_runner.go:130] > # 	"image_pulls_successes",
	I1127 23:54:02.946097 1525568 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1127 23:54:02.946102 1525568 command_runner.go:130] > # 	"image_layer_reuse",
	I1127 23:54:02.946110 1525568 command_runner.go:130] > # 	"containers_oom_total",
	I1127 23:54:02.946115 1525568 command_runner.go:130] > # 	"containers_oom",
	I1127 23:54:02.946123 1525568 command_runner.go:130] > # 	"processes_defunct",
	I1127 23:54:02.946131 1525568 command_runner.go:130] > # 	"operations_total",
	I1127 23:54:02.946137 1525568 command_runner.go:130] > # 	"operations_latency_seconds",
	I1127 23:54:02.946143 1525568 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1127 23:54:02.946153 1525568 command_runner.go:130] > # 	"operations_errors_total",
	I1127 23:54:02.946362 1525568 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1127 23:54:02.946378 1525568 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1127 23:54:02.946388 1525568 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1127 23:54:02.946394 1525568 command_runner.go:130] > # 	"image_pulls_success_total",
	I1127 23:54:02.946409 1525568 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1127 23:54:02.946418 1525568 command_runner.go:130] > # 	"containers_oom_count_total",
	I1127 23:54:02.946427 1525568 command_runner.go:130] > # ]
	I1127 23:54:02.946433 1525568 command_runner.go:130] > # The port on which the metrics server will listen.
	I1127 23:54:02.946439 1525568 command_runner.go:130] > # metrics_port = 9090
	I1127 23:54:02.946446 1525568 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1127 23:54:02.946450 1525568 command_runner.go:130] > # metrics_socket = ""
	I1127 23:54:02.946460 1525568 command_runner.go:130] > # The certificate for the secure metrics server.
	I1127 23:54:02.946468 1525568 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1127 23:54:02.946478 1525568 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1127 23:54:02.946487 1525568 command_runner.go:130] > # certificate on any modification event.
	I1127 23:54:02.946496 1525568 command_runner.go:130] > # metrics_cert = ""
	I1127 23:54:02.946503 1525568 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1127 23:54:02.946512 1525568 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1127 23:54:02.946517 1525568 command_runner.go:130] > # metrics_key = ""
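With enable_metrics turned on, the collectors listed above are served in Prometheus text format on metrics_port, so they can be spot-checked from the node, e.g.:

  curl -s http://127.0.0.1:9090/metrics | grep '^crio_'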
	I1127 23:54:02.946524 1525568 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1127 23:54:02.946529 1525568 command_runner.go:130] > [crio.tracing]
	I1127 23:54:02.946539 1525568 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1127 23:54:02.946546 1525568 command_runner.go:130] > # enable_tracing = false
	I1127 23:54:02.946556 1525568 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1127 23:54:02.946566 1525568 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1127 23:54:02.946576 1525568 command_runner.go:130] > # Number of samples to collect per million spans.
	I1127 23:54:02.946585 1525568 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1127 23:54:02.946593 1525568 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1127 23:54:02.946597 1525568 command_runner.go:130] > [crio.stats]
	I1127 23:54:02.946605 1525568 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1127 23:54:02.946616 1525568 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1127 23:54:02.946622 1525568 command_runner.go:130] > # stats_collection_period = 0
	I1127 23:54:02.948173 1525568 command_runner.go:130] ! time="2023-11-27 23:54:02.934064502Z" level=info msg="Starting CRI-O, version: 1.24.6, git: 4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90(clean)"
	I1127 23:54:02.948200 1525568 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1127 23:54:02.948292 1525568 cni.go:84] Creating CNI manager for ""
	I1127 23:54:02.948304 1525568 cni.go:136] 1 nodes found, recommending kindnet
	I1127 23:54:02.948334 1525568 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1127 23:54:02.948356 1525568 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-784312 NodeName:multinode-784312 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1127 23:54:02.948505 1525568 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-784312"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
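
This rendered config is copied to /var/tmp/minikube/kubeadm.yaml on the node (see below); with kubeadm v1.26 and later it can be sanity-checked in place with:

  sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml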
	
	I1127 23:54:02.948575 1525568 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=multinode-784312 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-784312 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
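The unit fragment above is installed as a systemd drop-in (note the scp to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf just below); the merged unit that systemd actually runs can be inspected on the node with:

  systemctl cat kubelet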
	I1127 23:54:02.948643 1525568 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1127 23:54:02.958341 1525568 command_runner.go:130] > kubeadm
	I1127 23:54:02.958359 1525568 command_runner.go:130] > kubectl
	I1127 23:54:02.958364 1525568 command_runner.go:130] > kubelet
	I1127 23:54:02.959644 1525568 binaries.go:44] Found k8s binaries, skipping transfer
	I1127 23:54:02.959721 1525568 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1127 23:54:02.970441 1525568 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (426 bytes)
	I1127 23:54:02.991951 1525568 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1127 23:54:03.015937 1525568 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2097 bytes)
	I1127 23:54:03.039064 1525568 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I1127 23:54:03.044218 1525568 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1127 23:54:03.058918 1525568 certs.go:56] Setting up /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/multinode-784312 for IP: 192.168.58.2
	I1127 23:54:03.058954 1525568 certs.go:190] acquiring lock for shared ca certs: {Name:mk268ef230412b241734813f303d69d9b36c42ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1127 23:54:03.059097 1525568 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17206-1455288/.minikube/ca.key
	I1127 23:54:03.059168 1525568 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17206-1455288/.minikube/proxy-client-ca.key
	I1127 23:54:03.059222 1525568 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/multinode-784312/client.key
	I1127 23:54:03.059260 1525568 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/multinode-784312/client.crt with IP's: []
	I1127 23:54:03.536725 1525568 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/multinode-784312/client.crt ...
	I1127 23:54:03.536757 1525568 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/multinode-784312/client.crt: {Name:mkd3230338b57d00dfe7559da616ed0540eb1baa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1127 23:54:03.536963 1525568 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/multinode-784312/client.key ...
	I1127 23:54:03.536977 1525568 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/multinode-784312/client.key: {Name:mk130645d0320d7ff7c7ca3df23fd33b6db77733 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1127 23:54:03.537092 1525568 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/multinode-784312/apiserver.key.cee25041
	I1127 23:54:03.537112 1525568 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/multinode-784312/apiserver.crt.cee25041 with IP's: [192.168.58.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1127 23:54:03.874345 1525568 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/multinode-784312/apiserver.crt.cee25041 ...
	I1127 23:54:03.874377 1525568 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/multinode-784312/apiserver.crt.cee25041: {Name:mkb438a989048a9669d8f53fa38e62df6566de08 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1127 23:54:03.874591 1525568 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/multinode-784312/apiserver.key.cee25041 ...
	I1127 23:54:03.874608 1525568 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/multinode-784312/apiserver.key.cee25041: {Name:mkfee61c98e502e111526e499344936e0702a8f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1127 23:54:03.874709 1525568 certs.go:337] copying /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/multinode-784312/apiserver.crt.cee25041 -> /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/multinode-784312/apiserver.crt
	I1127 23:54:03.874790 1525568 certs.go:341] copying /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/multinode-784312/apiserver.key.cee25041 -> /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/multinode-784312/apiserver.key
	I1127 23:54:03.874849 1525568 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/multinode-784312/proxy-client.key
	I1127 23:54:03.874865 1525568 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/multinode-784312/proxy-client.crt with IP's: []
	I1127 23:54:04.276909 1525568 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/multinode-784312/proxy-client.crt ...
	I1127 23:54:04.276944 1525568 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/multinode-784312/proxy-client.crt: {Name:mk8286943a9a1ede2928249f62c9fb98565ef9ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1127 23:54:04.277131 1525568 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/multinode-784312/proxy-client.key ...
	I1127 23:54:04.277148 1525568 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/multinode-784312/proxy-client.key: {Name:mk7d0db27836d19aa3984513fd7ab843008f4aa4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1127 23:54:04.277226 1525568 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/multinode-784312/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1127 23:54:04.277248 1525568 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/multinode-784312/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1127 23:54:04.277263 1525568 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/multinode-784312/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1127 23:54:04.277278 1525568 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/multinode-784312/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1127 23:54:04.277289 1525568 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-1455288/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1127 23:54:04.277307 1525568 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-1455288/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1127 23:54:04.277322 1525568 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-1455288/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1127 23:54:04.277333 1525568 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-1455288/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1127 23:54:04.277387 1525568 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-1455288/.minikube/certs/home/jenkins/minikube-integration/17206-1455288/.minikube/certs/1460652.pem (1338 bytes)
	W1127 23:54:04.277428 1525568 certs.go:433] ignoring /home/jenkins/minikube-integration/17206-1455288/.minikube/certs/home/jenkins/minikube-integration/17206-1455288/.minikube/certs/1460652_empty.pem, impossibly tiny 0 bytes
	I1127 23:54:04.277442 1525568 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-1455288/.minikube/certs/home/jenkins/minikube-integration/17206-1455288/.minikube/certs/ca-key.pem (1679 bytes)
	I1127 23:54:04.277471 1525568 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-1455288/.minikube/certs/home/jenkins/minikube-integration/17206-1455288/.minikube/certs/ca.pem (1078 bytes)
	I1127 23:54:04.277503 1525568 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-1455288/.minikube/certs/home/jenkins/minikube-integration/17206-1455288/.minikube/certs/cert.pem (1123 bytes)
	I1127 23:54:04.277533 1525568 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-1455288/.minikube/certs/home/jenkins/minikube-integration/17206-1455288/.minikube/certs/key.pem (1679 bytes)
	I1127 23:54:04.277582 1525568 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-1455288/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17206-1455288/.minikube/files/etc/ssl/certs/14606522.pem (1708 bytes)
	I1127 23:54:04.277613 1525568 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-1455288/.minikube/files/etc/ssl/certs/14606522.pem -> /usr/share/ca-certificates/14606522.pem
	I1127 23:54:04.277636 1525568 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-1455288/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1127 23:54:04.277656 1525568 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-1455288/.minikube/certs/1460652.pem -> /usr/share/ca-certificates/1460652.pem
	I1127 23:54:04.278304 1525568 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/multinode-784312/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1127 23:54:04.309825 1525568 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/multinode-784312/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1127 23:54:04.338489 1525568 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/multinode-784312/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1127 23:54:04.366763 1525568 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/multinode-784312/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1127 23:54:04.395097 1525568 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-1455288/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1127 23:54:04.423010 1525568 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-1455288/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1127 23:54:04.450739 1525568 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-1455288/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1127 23:54:04.479135 1525568 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-1455288/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1127 23:54:04.507500 1525568 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-1455288/.minikube/files/etc/ssl/certs/14606522.pem --> /usr/share/ca-certificates/14606522.pem (1708 bytes)
	I1127 23:54:04.535949 1525568 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-1455288/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1127 23:54:04.564232 1525568 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-1455288/.minikube/certs/1460652.pem --> /usr/share/ca-certificates/1460652.pem (1338 bytes)
	I1127 23:54:04.592627 1525568 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1127 23:54:04.613607 1525568 ssh_runner.go:195] Run: openssl version
	I1127 23:54:04.621940 1525568 command_runner.go:130] > OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)
	I1127 23:54:04.622348 1525568 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1127 23:54:04.634487 1525568 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1127 23:54:04.638861 1525568 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Nov 27 23:31 /usr/share/ca-certificates/minikubeCA.pem
	I1127 23:54:04.639139 1525568 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Nov 27 23:31 /usr/share/ca-certificates/minikubeCA.pem
	I1127 23:54:04.639204 1525568 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1127 23:54:04.647491 1525568 command_runner.go:130] > b5213941
	I1127 23:54:04.647972 1525568 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1127 23:54:04.659741 1525568 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1460652.pem && ln -fs /usr/share/ca-certificates/1460652.pem /etc/ssl/certs/1460652.pem"
	I1127 23:54:04.671250 1525568 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1460652.pem
	I1127 23:54:04.675781 1525568 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Nov 27 23:38 /usr/share/ca-certificates/1460652.pem
	I1127 23:54:04.676039 1525568 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Nov 27 23:38 /usr/share/ca-certificates/1460652.pem
	I1127 23:54:04.676100 1525568 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1460652.pem
	I1127 23:54:04.684601 1525568 command_runner.go:130] > 51391683
	I1127 23:54:04.685121 1525568 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1460652.pem /etc/ssl/certs/51391683.0"
	I1127 23:54:04.696507 1525568 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14606522.pem && ln -fs /usr/share/ca-certificates/14606522.pem /etc/ssl/certs/14606522.pem"
	I1127 23:54:04.708199 1525568 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14606522.pem
	I1127 23:54:04.712582 1525568 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Nov 27 23:38 /usr/share/ca-certificates/14606522.pem
	I1127 23:54:04.712830 1525568 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Nov 27 23:38 /usr/share/ca-certificates/14606522.pem
	I1127 23:54:04.712886 1525568 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14606522.pem
	I1127 23:54:04.720988 1525568 command_runner.go:130] > 3ec20f2e
	I1127 23:54:04.721405 1525568 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14606522.pem /etc/ssl/certs/3ec20f2e.0"
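The eight-hex-digit link names follow OpenSSL's subject-hash convention (the same scheme c_rehash uses), so the hashes logged above can be reproduced and the resulting trust store exercised with, e.g.:

  openssl x509 -subject_hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
  openssl verify -CApath /etc/ssl/certs /usr/share/ca-certificates/minikubeCA.pem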
	I1127 23:54:04.732774 1525568 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1127 23:54:04.736989 1525568 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1127 23:54:04.737020 1525568 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1127 23:54:04.737058 1525568 kubeadm.go:404] StartCluster: {Name:multinode-784312 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-784312 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1127 23:54:04.737142 1525568 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1127 23:54:04.737205 1525568 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1127 23:54:04.780789 1525568 cri.go:89] found id: ""
	I1127 23:54:04.780859 1525568 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1127 23:54:04.791667 1525568 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I1127 23:54:04.791691 1525568 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I1127 23:54:04.791699 1525568 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I1127 23:54:04.791793 1525568 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1127 23:54:04.802374 1525568 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I1127 23:54:04.802441 1525568 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1127 23:54:04.812814 1525568 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I1127 23:54:04.812841 1525568 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I1127 23:54:04.812851 1525568 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I1127 23:54:04.812860 1525568 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1127 23:54:04.812888 1525568 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1127 23:54:04.812926 1525568 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1127 23:54:04.867319 1525568 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I1127 23:54:04.867355 1525568 command_runner.go:130] > [init] Using Kubernetes version: v1.28.4
	I1127 23:54:04.867562 1525568 kubeadm.go:322] [preflight] Running pre-flight checks
	I1127 23:54:04.867585 1525568 command_runner.go:130] > [preflight] Running pre-flight checks
	I1127 23:54:04.911857 1525568 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I1127 23:54:04.911930 1525568 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
	I1127 23:54:04.912057 1525568 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1050-aws
	I1127 23:54:04.912091 1525568 command_runner.go:130] > KERNEL_VERSION: 5.15.0-1050-aws
	I1127 23:54:04.912165 1525568 kubeadm.go:322] OS: Linux
	I1127 23:54:04.912188 1525568 command_runner.go:130] > OS: Linux
	I1127 23:54:04.912282 1525568 kubeadm.go:322] CGROUPS_CPU: enabled
	I1127 23:54:04.912308 1525568 command_runner.go:130] > CGROUPS_CPU: enabled
	I1127 23:54:04.912405 1525568 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I1127 23:54:04.912427 1525568 command_runner.go:130] > CGROUPS_CPUACCT: enabled
	I1127 23:54:04.912498 1525568 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I1127 23:54:04.912528 1525568 command_runner.go:130] > CGROUPS_CPUSET: enabled
	I1127 23:54:04.912613 1525568 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I1127 23:54:04.912636 1525568 command_runner.go:130] > CGROUPS_DEVICES: enabled
	I1127 23:54:04.912711 1525568 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I1127 23:54:04.912734 1525568 command_runner.go:130] > CGROUPS_FREEZER: enabled
	I1127 23:54:04.912829 1525568 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I1127 23:54:04.912851 1525568 command_runner.go:130] > CGROUPS_MEMORY: enabled
	I1127 23:54:04.912932 1525568 kubeadm.go:322] CGROUPS_PIDS: enabled
	I1127 23:54:04.912953 1525568 command_runner.go:130] > CGROUPS_PIDS: enabled
	I1127 23:54:04.913044 1525568 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I1127 23:54:04.913067 1525568 command_runner.go:130] > CGROUPS_HUGETLB: enabled
	I1127 23:54:04.913140 1525568 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I1127 23:54:04.913162 1525568 command_runner.go:130] > CGROUPS_BLKIO: enabled
	I1127 23:54:04.994494 1525568 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1127 23:54:04.994562 1525568 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I1127 23:54:04.994685 1525568 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1127 23:54:04.994708 1525568 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1127 23:54:04.994816 1525568 kubeadm.go:322] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1127 23:54:04.994839 1525568 command_runner.go:130] > [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1127 23:54:05.278241 1525568 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1127 23:54:05.282872 1525568 out.go:204]   - Generating certificates and keys ...
	I1127 23:54:05.278327 1525568 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1127 23:54:05.282986 1525568 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1127 23:54:05.283003 1525568 command_runner.go:130] > [certs] Using existing ca certificate authority
	I1127 23:54:05.283068 1525568 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1127 23:54:05.283077 1525568 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I1127 23:54:05.525333 1525568 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1127 23:54:05.525399 1525568 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I1127 23:54:06.265033 1525568 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1127 23:54:06.265057 1525568 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I1127 23:54:06.771242 1525568 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1127 23:54:06.771269 1525568 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I1127 23:54:07.204300 1525568 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1127 23:54:07.204328 1525568 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I1127 23:54:07.504906 1525568 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1127 23:54:07.504932 1525568 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I1127 23:54:07.505391 1525568 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-784312] and IPs [192.168.58.2 127.0.0.1 ::1]
	I1127 23:54:07.505406 1525568 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-784312] and IPs [192.168.58.2 127.0.0.1 ::1]
	I1127 23:54:07.962598 1525568 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1127 23:54:07.962626 1525568 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I1127 23:54:07.962897 1525568 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-784312] and IPs [192.168.58.2 127.0.0.1 ::1]
	I1127 23:54:07.962913 1525568 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-784312] and IPs [192.168.58.2 127.0.0.1 ::1]
	I1127 23:54:08.334879 1525568 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1127 23:54:08.334909 1525568 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I1127 23:54:09.025653 1525568 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1127 23:54:09.025679 1525568 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I1127 23:54:09.533763 1525568 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1127 23:54:09.533789 1525568 command_runner.go:130] > [certs] Generating "sa" key and public key
	I1127 23:54:09.534166 1525568 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1127 23:54:09.534181 1525568 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1127 23:54:10.482769 1525568 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1127 23:54:10.482805 1525568 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I1127 23:54:11.247520 1525568 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1127 23:54:11.247546 1525568 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1127 23:54:11.518233 1525568 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1127 23:54:11.518273 1525568 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1127 23:54:12.246929 1525568 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1127 23:54:12.246953 1525568 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1127 23:54:12.247728 1525568 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1127 23:54:12.247749 1525568 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1127 23:54:12.252280 1525568 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1127 23:54:12.254770 1525568 out.go:204]   - Booting up control plane ...
	I1127 23:54:12.252418 1525568 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1127 23:54:12.254896 1525568 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1127 23:54:12.254907 1525568 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1127 23:54:12.254975 1525568 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1127 23:54:12.254981 1525568 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1127 23:54:12.255551 1525568 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1127 23:54:12.255568 1525568 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1127 23:54:12.266374 1525568 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1127 23:54:12.266406 1525568 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1127 23:54:12.267660 1525568 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1127 23:54:12.267681 1525568 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1127 23:54:12.267719 1525568 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1127 23:54:12.267728 1525568 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1127 23:54:12.372630 1525568 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1127 23:54:12.372662 1525568 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1127 23:54:20.876842 1525568 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.503710 seconds
	I1127 23:54:20.876867 1525568 command_runner.go:130] > [apiclient] All control plane components are healthy after 8.503710 seconds
	I1127 23:54:20.876965 1525568 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1127 23:54:20.876971 1525568 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1127 23:54:20.919831 1525568 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1127 23:54:20.919858 1525568 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1127 23:54:21.454166 1525568 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1127 23:54:21.454191 1525568 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I1127 23:54:21.454362 1525568 kubeadm.go:322] [mark-control-plane] Marking the node multinode-784312 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1127 23:54:21.454368 1525568 command_runner.go:130] > [mark-control-plane] Marking the node multinode-784312 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1127 23:54:21.965789 1525568 kubeadm.go:322] [bootstrap-token] Using token: 1bnjt7.4rthwkat45weieri
	I1127 23:54:21.967458 1525568 out.go:204]   - Configuring RBAC rules ...
	I1127 23:54:21.965912 1525568 command_runner.go:130] > [bootstrap-token] Using token: 1bnjt7.4rthwkat45weieri
	I1127 23:54:21.967589 1525568 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1127 23:54:21.967600 1525568 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1127 23:54:21.973742 1525568 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1127 23:54:21.973765 1525568 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1127 23:54:21.982012 1525568 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1127 23:54:21.982038 1525568 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1127 23:54:21.985806 1525568 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1127 23:54:21.985829 1525568 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1127 23:54:21.989624 1525568 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1127 23:54:21.989650 1525568 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1127 23:54:21.994281 1525568 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1127 23:54:21.994304 1525568 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1127 23:54:22.016505 1525568 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1127 23:54:22.016527 1525568 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1127 23:54:22.264095 1525568 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1127 23:54:22.264118 1525568 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I1127 23:54:22.410486 1525568 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1127 23:54:22.410508 1525568 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I1127 23:54:22.410515 1525568 kubeadm.go:322] 
	I1127 23:54:22.410571 1525568 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1127 23:54:22.410576 1525568 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I1127 23:54:22.410580 1525568 kubeadm.go:322] 
	I1127 23:54:22.410656 1525568 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1127 23:54:22.410662 1525568 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I1127 23:54:22.410666 1525568 kubeadm.go:322] 
	I1127 23:54:22.410690 1525568 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1127 23:54:22.410695 1525568 command_runner.go:130] >   mkdir -p $HOME/.kube
	I1127 23:54:22.410749 1525568 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1127 23:54:22.410754 1525568 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1127 23:54:22.410801 1525568 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1127 23:54:22.410806 1525568 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1127 23:54:22.410810 1525568 kubeadm.go:322] 
	I1127 23:54:22.410860 1525568 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1127 23:54:22.410866 1525568 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I1127 23:54:22.410870 1525568 kubeadm.go:322] 
	I1127 23:54:22.410914 1525568 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1127 23:54:22.410919 1525568 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1127 23:54:22.410923 1525568 kubeadm.go:322] 
	I1127 23:54:22.410972 1525568 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1127 23:54:22.410976 1525568 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I1127 23:54:22.411046 1525568 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1127 23:54:22.411051 1525568 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1127 23:54:22.411115 1525568 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1127 23:54:22.411119 1525568 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1127 23:54:22.411131 1525568 kubeadm.go:322] 
	I1127 23:54:22.411210 1525568 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1127 23:54:22.411215 1525568 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I1127 23:54:22.411286 1525568 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1127 23:54:22.411291 1525568 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I1127 23:54:22.411295 1525568 kubeadm.go:322] 
	I1127 23:54:22.411373 1525568 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 1bnjt7.4rthwkat45weieri \
	I1127 23:54:22.411378 1525568 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token 1bnjt7.4rthwkat45weieri \
	I1127 23:54:22.411473 1525568 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:4bf7580b26a5e006cb8545a36d546acc708a5c0d8ea7cd28dd99f58e9fcb6509 \
	I1127 23:54:22.411478 1525568 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:4bf7580b26a5e006cb8545a36d546acc708a5c0d8ea7cd28dd99f58e9fcb6509 \
	I1127 23:54:22.411498 1525568 kubeadm.go:322] 	--control-plane 
	I1127 23:54:22.411502 1525568 command_runner.go:130] > 	--control-plane 
	I1127 23:54:22.411506 1525568 kubeadm.go:322] 
	I1127 23:54:22.411586 1525568 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1127 23:54:22.411596 1525568 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I1127 23:54:22.411600 1525568 kubeadm.go:322] 
	I1127 23:54:22.411677 1525568 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 1bnjt7.4rthwkat45weieri \
	I1127 23:54:22.411682 1525568 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token 1bnjt7.4rthwkat45weieri \
	I1127 23:54:22.411776 1525568 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:4bf7580b26a5e006cb8545a36d546acc708a5c0d8ea7cd28dd99f58e9fcb6509 
	I1127 23:54:22.411781 1525568 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:4bf7580b26a5e006cb8545a36d546acc708a5c0d8ea7cd28dd99f58e9fcb6509 
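The --discovery-token-ca-cert-hash printed above is the SHA-256 of the cluster CA's DER-encoded public key; it can be recomputed from the CA certificate (here /var/lib/minikube/certs/ca.crt) with one standard recipe:

  openssl x509 -pubkey -noout -in /var/lib/minikube/certs/ca.crt \
    | openssl pkey -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 -hex | sed 's/^.* //'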
	I1127 23:54:22.417574 1525568 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1050-aws\n", err: exit status 1
	I1127 23:54:22.417596 1525568 command_runner.go:130] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1050-aws\n", err: exit status 1
	I1127 23:54:22.417694 1525568 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1127 23:54:22.417701 1525568 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1127 23:54:22.417713 1525568 cni.go:84] Creating CNI manager for ""
	I1127 23:54:22.417719 1525568 cni.go:136] 1 nodes found, recommending kindnet
	I1127 23:54:22.419711 1525568 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1127 23:54:22.421739 1525568 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1127 23:54:22.436982 1525568 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I1127 23:54:22.437004 1525568 command_runner.go:130] >   Size: 3841245   	Blocks: 7504       IO Block: 4096   regular file
	I1127 23:54:22.437012 1525568 command_runner.go:130] > Device: 36h/54d	Inode: 5712268     Links: 1
	I1127 23:54:22.437023 1525568 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I1127 23:54:22.437030 1525568 command_runner.go:130] > Access: 2023-05-09 19:54:42.000000000 +0000
	I1127 23:54:22.437036 1525568 command_runner.go:130] > Modify: 2023-05-09 19:54:42.000000000 +0000
	I1127 23:54:22.437042 1525568 command_runner.go:130] > Change: 2023-11-27 23:30:32.626003656 +0000
	I1127 23:54:22.437050 1525568 command_runner.go:130] >  Birth: 2023-11-27 23:30:32.582004044 +0000
	I1127 23:54:22.441018 1525568 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I1127 23:54:22.441035 1525568 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1127 23:54:22.501450 1525568 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1127 23:54:23.300745 1525568 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I1127 23:54:23.307220 1525568 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I1127 23:54:23.315964 1525568 command_runner.go:130] > serviceaccount/kindnet created
	I1127 23:54:23.330209 1525568 command_runner.go:130] > daemonset.apps/kindnet created
	I1127 23:54:23.336240 1525568 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1127 23:54:23.336294 1525568 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:54:23.336361 1525568 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=c8086df69b157f30f19d083fe45cc014f102df45 minikube.k8s.io/name=multinode-784312 minikube.k8s.io/updated_at=2023_11_27T23_54_23_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:54:23.488318 1525568 command_runner.go:130] > node/multinode-784312 labeled
	I1127 23:54:23.492122 1525568 command_runner.go:130] > -16
	I1127 23:54:23.492164 1525568 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I1127 23:54:23.492188 1525568 ops.go:34] apiserver oom_adj: -16
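
The -16 recorded by ops.go here is the kube-apiserver's legacy OOM score adjustment, read back through the bash one-liner at 23:54:23.336240. The same read in Go, assuming a single kube-apiserver process (pgrep can return several pids):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// Resolve the apiserver pid the same way the logged one-liner does.
	out, err := exec.Command("pgrep", "kube-apiserver").Output()
	if err != nil {
		panic(err)
	}
	pid := strings.Fields(string(out))[0]
	// oom_adj is the legacy interface; newer kernels also expose oom_score_adj.
	adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
	if err != nil {
		panic(err)
	}
	fmt.Printf("apiserver oom_adj: %s\n", strings.TrimSpace(string(adj)))
}
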
	I1127 23:54:23.492255 1525568 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:54:23.611167 1525568 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1127 23:54:23.611262 1525568 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:54:23.704464 1525568 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1127 23:54:24.205180 1525568 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:54:24.292215 1525568 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1127 23:54:24.705376 1525568 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:54:24.795612 1525568 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1127 23:54:25.205298 1525568 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:54:25.300644 1525568 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1127 23:54:25.705243 1525568 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:54:25.792765 1525568 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1127 23:54:26.204637 1525568 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:54:26.301454 1525568 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1127 23:54:26.704884 1525568 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:54:26.796113 1525568 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1127 23:54:27.204669 1525568 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:54:27.294446 1525568 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1127 23:54:27.705018 1525568 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:54:27.803456 1525568 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1127 23:54:28.205601 1525568 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:54:28.300180 1525568 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1127 23:54:28.705569 1525568 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:54:28.794006 1525568 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1127 23:54:29.205677 1525568 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:54:29.307374 1525568 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1127 23:54:29.704953 1525568 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:54:29.790439 1525568 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1127 23:54:30.204638 1525568 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:54:30.297774 1525568 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1127 23:54:30.705379 1525568 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:54:30.790890 1525568 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1127 23:54:31.205671 1525568 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:54:31.297991 1525568 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1127 23:54:31.704904 1525568 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:54:31.793057 1525568 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1127 23:54:32.205458 1525568 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:54:32.307169 1525568 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1127 23:54:32.705089 1525568 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:54:32.804978 1525568 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1127 23:54:33.205538 1525568 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:54:33.303370 1525568 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1127 23:54:33.705103 1525568 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:54:33.803111 1525568 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1127 23:54:34.205611 1525568 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:54:34.300568 1525568 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1127 23:54:34.704733 1525568 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:54:34.801696 1525568 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1127 23:54:35.204749 1525568 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:54:35.301575 1525568 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1127 23:54:35.704691 1525568 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:54:35.801946 1525568 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1127 23:54:36.204799 1525568 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:54:36.324891 1525568 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1127 23:54:36.705562 1525568 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:54:36.821008 1525568 command_runner.go:130] > NAME      SECRETS   AGE
	I1127 23:54:36.821031 1525568 command_runner.go:130] > default   0         0s
	I1127 23:54:36.824624 1525568 kubeadm.go:1081] duration metric: took 13.488384965s to wait for elevateKubeSystemPrivileges.
	I1127 23:54:36.824658 1525568 kubeadm.go:406] StartCluster complete in 32.087603273s
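
The run of NotFound errors above is the expected settling window: minikube retries kubectl get sa default roughly every 500ms until the token controller has created the namespace's default ServiceAccount, which took about 13.5s here. A client-go sketch of the same wait; the kubeconfig path is the on-node one from the log, so this assumes it runs where that file exists:

package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	defer cancel()
	// Poll every 500ms until the "default" ServiceAccount exists, as the
	// retry loop in the log does via kubectl.
	for {
		if _, err := client.CoreV1().ServiceAccounts("default").Get(ctx, "default", metav1.GetOptions{}); err == nil {
			fmt.Println("default serviceaccount is ready")
			return
		}
		select {
		case <-ctx.Done():
			panic("timed out waiting for default serviceaccount")
		case <-time.After(500 * time.Millisecond):
		}
	}
}

In the log the wait shells out to the version-pinned kubectl under /var/lib/minikube/binaries/v1.28.4 instead of using client-go, which keeps the client matched to the cluster's Kubernetes version.
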
	I1127 23:54:36.824676 1525568 settings.go:142] acquiring lock: {Name:mk2effde19f5a08dd61e438cec70b0751f0f2f09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1127 23:54:36.824749 1525568 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17206-1455288/kubeconfig
	I1127 23:54:36.825413 1525568 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17206-1455288/kubeconfig: {Name:mk024e2b9ecd216772e0b17d0d1d16e859027716 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1127 23:54:36.825961 1525568 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17206-1455288/kubeconfig
	I1127 23:54:36.826224 1525568 kapi.go:59] client config for multinode-784312: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/multinode-784312/client.crt", KeyFile:"/home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/multinode-784312/client.key", CAFile:"/home/jenkins/minikube-integration/17206-1455288/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16c6420), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1127 23:54:36.827175 1525568 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1127 23:54:36.827195 1525568 round_trippers.go:469] Request Headers:
	I1127 23:54:36.827205 1525568 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:54:36.827212 1525568 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1127 23:54:36.827719 1525568 config.go:182] Loaded profile config "multinode-784312": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1127 23:54:36.827783 1525568 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1127 23:54:36.827875 1525568 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1127 23:54:36.827944 1525568 addons.go:69] Setting storage-provisioner=true in profile "multinode-784312"
	I1127 23:54:36.827961 1525568 addons.go:231] Setting addon storage-provisioner=true in "multinode-784312"
	I1127 23:54:36.828018 1525568 host.go:66] Checking if "multinode-784312" exists ...
	I1127 23:54:36.828466 1525568 cli_runner.go:164] Run: docker container inspect multinode-784312 --format={{.State.Status}}
	I1127 23:54:36.828964 1525568 cert_rotation.go:137] Starting client certificate rotation controller
	I1127 23:54:36.829020 1525568 addons.go:69] Setting default-storageclass=true in profile "multinode-784312"
	I1127 23:54:36.829036 1525568 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-784312"
	I1127 23:54:36.829283 1525568 cli_runner.go:164] Run: docker container inspect multinode-784312 --format={{.State.Status}}
	I1127 23:54:36.852887 1525568 round_trippers.go:574] Response Status: 200 OK in 25 milliseconds
	I1127 23:54:36.852953 1525568 round_trippers.go:577] Response Headers:
	I1127 23:54:36.852975 1525568 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:54:36 GMT
	I1127 23:54:36.852993 1525568 round_trippers.go:580]     Audit-Id: 81a975f8-04c4-447b-8407-2e60fb97bd4b
	I1127 23:54:36.853024 1525568 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:54:36.853046 1525568 round_trippers.go:580]     Content-Type: application/json
	I1127 23:54:36.853064 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 319b43a7-5ec8-4ef8-8721-c1f74315c4f2
	I1127 23:54:36.853083 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ef08c5e-2262-449a-8b3b-5a816f8a0487
	I1127 23:54:36.853102 1525568 round_trippers.go:580]     Content-Length: 291
	I1127 23:54:36.853155 1525568 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"a3cec32e-d838-4f12-bc00-b18b4198854e","resourceVersion":"354","creationTimestamp":"2023-11-27T23:54:22Z"},"spec":{"replicas":2},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I1127 23:54:36.854601 1525568 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"a3cec32e-d838-4f12-bc00-b18b4198854e","resourceVersion":"354","creationTimestamp":"2023-11-27T23:54:22Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I1127 23:54:36.854696 1525568 round_trippers.go:463] PUT https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1127 23:54:36.854737 1525568 round_trippers.go:469] Request Headers:
	I1127 23:54:36.854764 1525568 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:54:36.854784 1525568 round_trippers.go:473]     Content-Type: application/json
	I1127 23:54:36.854819 1525568 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1127 23:54:36.875746 1525568 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17206-1455288/kubeconfig
	I1127 23:54:36.876012 1525568 kapi.go:59] client config for multinode-784312: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/multinode-784312/client.crt", KeyFile:"/home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/multinode-784312/client.key", CAFile:"/home/jenkins/minikube-integration/17206-1455288/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16c6420), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1127 23:54:36.876287 1525568 addons.go:231] Setting addon default-storageclass=true in "multinode-784312"
	I1127 23:54:36.876323 1525568 host.go:66] Checking if "multinode-784312" exists ...
	I1127 23:54:36.876766 1525568 cli_runner.go:164] Run: docker container inspect multinode-784312 --format={{.State.Status}}
	I1127 23:54:36.884518 1525568 round_trippers.go:574] Response Status: 200 OK in 29 milliseconds
	I1127 23:54:36.884544 1525568 round_trippers.go:577] Response Headers:
	I1127 23:54:36.884558 1525568 round_trippers.go:580]     Content-Type: application/json
	I1127 23:54:36.884565 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 319b43a7-5ec8-4ef8-8721-c1f74315c4f2
	I1127 23:54:36.884572 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ef08c5e-2262-449a-8b3b-5a816f8a0487
	I1127 23:54:36.884578 1525568 round_trippers.go:580]     Content-Length: 291
	I1127 23:54:36.884584 1525568 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:54:36 GMT
	I1127 23:54:36.884594 1525568 round_trippers.go:580]     Audit-Id: 20da8a9a-a969-4633-a005-a1270c3cccca
	I1127 23:54:36.884600 1525568 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:54:36.884625 1525568 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"a3cec32e-d838-4f12-bc00-b18b4198854e","resourceVersion":"355","creationTimestamp":"2023-11-27T23:54:22Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I1127 23:54:36.884772 1525568 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1127 23:54:36.884785 1525568 round_trippers.go:469] Request Headers:
	I1127 23:54:36.884793 1525568 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:54:36.884800 1525568 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1127 23:54:36.907827 1525568 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1127 23:54:36.907847 1525568 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1127 23:54:36.907912 1525568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-784312
	I1127 23:54:36.912331 1525568 round_trippers.go:574] Response Status: 200 OK in 27 milliseconds
	I1127 23:54:36.912356 1525568 round_trippers.go:577] Response Headers:
	I1127 23:54:36.912364 1525568 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:54:36 GMT
	I1127 23:54:36.912371 1525568 round_trippers.go:580]     Audit-Id: faff9cbf-0d43-4603-8c54-f057187e56e8
	I1127 23:54:36.912378 1525568 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:54:36.912384 1525568 round_trippers.go:580]     Content-Type: application/json
	I1127 23:54:36.912390 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 319b43a7-5ec8-4ef8-8721-c1f74315c4f2
	I1127 23:54:36.912396 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ef08c5e-2262-449a-8b3b-5a816f8a0487
	I1127 23:54:36.912403 1525568 round_trippers.go:580]     Content-Length: 291
	I1127 23:54:36.912425 1525568 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"a3cec32e-d838-4f12-bc00-b18b4198854e","resourceVersion":"355","creationTimestamp":"2023-11-27T23:54:22Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I1127 23:54:36.912515 1525568 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-784312" context rescaled to 1 replicas
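
The GET/PUT pair against the coredns scale subresource above drops the deployment from 2 replicas to 1 for this single-node start. The same rescale through client-go's GetScale/UpdateScale, as a sketch (kubeconfig path assumed as before):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.Background()
	// Read the current scale of the coredns deployment, then write it back
	// with spec.replicas lowered to 1 (mirrors the GET + PUT in the log).
	scale, err := client.AppsV1().Deployments("kube-system").GetScale(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	scale.Spec.Replicas = 1
	if _, err := client.AppsV1().Deployments("kube-system").UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("coredns rescaled to 1 replica")
}
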
	I1127 23:54:36.912543 1525568 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1127 23:54:36.914596 1525568 out.go:177] * Verifying Kubernetes components...
	I1127 23:54:36.916751 1525568 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1127 23:54:36.929678 1525568 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1127 23:54:36.931417 1525568 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1127 23:54:36.931438 1525568 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1127 23:54:36.931507 1525568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-784312
	I1127 23:54:36.954294 1525568 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34144 SSHKeyPath:/home/jenkins/minikube-integration/17206-1455288/.minikube/machines/multinode-784312/id_rsa Username:docker}
	I1127 23:54:36.979628 1525568 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34144 SSHKeyPath:/home/jenkins/minikube-integration/17206-1455288/.minikube/machines/multinode-784312/id_rsa Username:docker}
	I1127 23:54:37.072584 1525568 command_runner.go:130] > apiVersion: v1
	I1127 23:54:37.072644 1525568 command_runner.go:130] > data:
	I1127 23:54:37.072664 1525568 command_runner.go:130] >   Corefile: |
	I1127 23:54:37.072682 1525568 command_runner.go:130] >     .:53 {
	I1127 23:54:37.072700 1525568 command_runner.go:130] >         errors
	I1127 23:54:37.072734 1525568 command_runner.go:130] >         health {
	I1127 23:54:37.072757 1525568 command_runner.go:130] >            lameduck 5s
	I1127 23:54:37.072774 1525568 command_runner.go:130] >         }
	I1127 23:54:37.072790 1525568 command_runner.go:130] >         ready
	I1127 23:54:37.072810 1525568 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I1127 23:54:37.072835 1525568 command_runner.go:130] >            pods insecure
	I1127 23:54:37.072858 1525568 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I1127 23:54:37.072876 1525568 command_runner.go:130] >            ttl 30
	I1127 23:54:37.072893 1525568 command_runner.go:130] >         }
	I1127 23:54:37.072909 1525568 command_runner.go:130] >         prometheus :9153
	I1127 23:54:37.072934 1525568 command_runner.go:130] >         forward . /etc/resolv.conf {
	I1127 23:54:37.072955 1525568 command_runner.go:130] >            max_concurrent 1000
	I1127 23:54:37.072974 1525568 command_runner.go:130] >         }
	I1127 23:54:37.072990 1525568 command_runner.go:130] >         cache 30
	I1127 23:54:37.073008 1525568 command_runner.go:130] >         loop
	I1127 23:54:37.073031 1525568 command_runner.go:130] >         reload
	I1127 23:54:37.073051 1525568 command_runner.go:130] >         loadbalance
	I1127 23:54:37.073068 1525568 command_runner.go:130] >     }
	I1127 23:54:37.073085 1525568 command_runner.go:130] > kind: ConfigMap
	I1127 23:54:37.073102 1525568 command_runner.go:130] > metadata:
	I1127 23:54:37.073129 1525568 command_runner.go:130] >   creationTimestamp: "2023-11-27T23:54:22Z"
	I1127 23:54:37.073149 1525568 command_runner.go:130] >   name: coredns
	I1127 23:54:37.073167 1525568 command_runner.go:130] >   namespace: kube-system
	I1127 23:54:37.073184 1525568 command_runner.go:130] >   resourceVersion: "217"
	I1127 23:54:37.073201 1525568 command_runner.go:130] >   uid: 135f1e4f-cb75-4703-8734-e707a5a2d7aa
	I1127 23:54:37.076536 1525568 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.58.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1127 23:54:37.076970 1525568 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17206-1455288/kubeconfig
	I1127 23:54:37.077229 1525568 kapi.go:59] client config for multinode-784312: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/multinode-784312/client.crt", KeyFile:"/home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/multinode-784312/client.key", CAFile:"/home/jenkins/minikube-integration/17206-1455288/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16c6420), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1127 23:54:37.077487 1525568 node_ready.go:35] waiting up to 6m0s for node "multinode-784312" to be "Ready" ...
	I1127 23:54:37.077583 1525568 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-784312
	I1127 23:54:37.077589 1525568 round_trippers.go:469] Request Headers:
	I1127 23:54:37.077598 1525568 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:54:37.077605 1525568 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1127 23:54:37.086342 1525568 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1127 23:54:37.086360 1525568 round_trippers.go:577] Response Headers:
	I1127 23:54:37.086368 1525568 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:54:37.086375 1525568 round_trippers.go:580]     Content-Type: application/json
	I1127 23:54:37.086381 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 319b43a7-5ec8-4ef8-8721-c1f74315c4f2
	I1127 23:54:37.086387 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ef08c5e-2262-449a-8b3b-5a816f8a0487
	I1127 23:54:37.086393 1525568 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:54:37 GMT
	I1127 23:54:37.086410 1525568 round_trippers.go:580]     Audit-Id: bd74fdc3-e82a-4c6f-92d5-38af2e85bafb
	I1127 23:54:37.086913 1525568 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-784312","uid":"9179679b-49e7-4573-ae82-bca6f8470046","resourceVersion":"307","creationTimestamp":"2023-11-27T23:54:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-784312","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-784312","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_54_23_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-27T23:54:19Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1127 23:54:37.087680 1525568 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-784312
	I1127 23:54:37.087722 1525568 round_trippers.go:469] Request Headers:
	I1127 23:54:37.087745 1525568 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:54:37.087764 1525568 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1127 23:54:37.098286 1525568 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I1127 23:54:37.098354 1525568 round_trippers.go:577] Response Headers:
	I1127 23:54:37.098375 1525568 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:54:37.098395 1525568 round_trippers.go:580]     Content-Type: application/json
	I1127 23:54:37.098412 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 319b43a7-5ec8-4ef8-8721-c1f74315c4f2
	I1127 23:54:37.098446 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ef08c5e-2262-449a-8b3b-5a816f8a0487
	I1127 23:54:37.098465 1525568 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:54:37 GMT
	I1127 23:54:37.098483 1525568 round_trippers.go:580]     Audit-Id: 9cf88ec7-6194-4a44-9da8-b8d72717b3b9
	I1127 23:54:37.102459 1525568 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-784312","uid":"9179679b-49e7-4573-ae82-bca6f8470046","resourceVersion":"307","creationTimestamp":"2023-11-27T23:54:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-784312","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-784312","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_54_23_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-27T23:54:19Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1127 23:54:37.127013 1525568 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1127 23:54:37.178556 1525568 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1127 23:54:37.603813 1525568 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-784312
	I1127 23:54:37.603875 1525568 round_trippers.go:469] Request Headers:
	I1127 23:54:37.603906 1525568 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:54:37.603925 1525568 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1127 23:54:37.611098 1525568 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1127 23:54:37.611180 1525568 round_trippers.go:577] Response Headers:
	I1127 23:54:37.611203 1525568 round_trippers.go:580]     Audit-Id: f620b9cc-1432-4114-8c88-f4edf0672700
	I1127 23:54:37.611220 1525568 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:54:37.611247 1525568 round_trippers.go:580]     Content-Type: application/json
	I1127 23:54:37.611270 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 319b43a7-5ec8-4ef8-8721-c1f74315c4f2
	I1127 23:54:37.611288 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ef08c5e-2262-449a-8b3b-5a816f8a0487
	I1127 23:54:37.611307 1525568 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:54:37 GMT
	I1127 23:54:37.615486 1525568 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-784312","uid":"9179679b-49e7-4573-ae82-bca6f8470046","resourceVersion":"307","creationTimestamp":"2023-11-27T23:54:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-784312","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-784312","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_54_23_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-27T23:54:19Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1127 23:54:37.726578 1525568 command_runner.go:130] > configmap/coredns replaced
	I1127 23:54:37.728316 1525568 start.go:926] {"host.minikube.internal": 192.168.58.1} host record injected into CoreDNS's ConfigMap
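
The sed pipeline at 23:54:37.076536 splices a hosts block, mapping host.minikube.internal to the gateway address 192.168.58.1, into the Corefile just ahead of the forward plugin (and a log directive ahead of errors). A sketch of the hosts half of that edit with client-go, assuming the eight-space plugin indentation shown in the ConfigMap dump above:

package main

import (
	"context"
	"strings"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

const hostsBlock = "        hosts {\n           192.168.58.1 host.minikube.internal\n           fallthrough\n        }\n"

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.Background()
	cm, err := client.CoreV1().ConfigMaps("kube-system").Get(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// Insert the hosts block just before the forward plugin, as the sed
	// expression in the log does, then replace the ConfigMap.
	corefile := cm.Data["Corefile"]
	cm.Data["Corefile"] = strings.Replace(corefile, "        forward .", hostsBlock+"        forward .", 1)
	if _, err := client.CoreV1().ConfigMaps("kube-system").Update(ctx, cm, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
}
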
	I1127 23:54:38.024717 1525568 command_runner.go:130] > serviceaccount/storage-provisioner created
	I1127 23:54:38.024753 1525568 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I1127 23:54:38.024763 1525568 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I1127 23:54:38.024772 1525568 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I1127 23:54:38.024779 1525568 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I1127 23:54:38.024785 1525568 command_runner.go:130] > pod/storage-provisioner created
	I1127 23:54:38.024839 1525568 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I1127 23:54:38.024952 1525568 round_trippers.go:463] GET https://192.168.58.2:8443/apis/storage.k8s.io/v1/storageclasses
	I1127 23:54:38.024964 1525568 round_trippers.go:469] Request Headers:
	I1127 23:54:38.024982 1525568 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1127 23:54:38.024994 1525568 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:54:38.052699 1525568 round_trippers.go:574] Response Status: 200 OK in 27 milliseconds
	I1127 23:54:38.052740 1525568 round_trippers.go:577] Response Headers:
	I1127 23:54:38.052749 1525568 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:54:38.052756 1525568 round_trippers.go:580]     Content-Type: application/json
	I1127 23:54:38.052762 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 319b43a7-5ec8-4ef8-8721-c1f74315c4f2
	I1127 23:54:38.052768 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ef08c5e-2262-449a-8b3b-5a816f8a0487
	I1127 23:54:38.052775 1525568 round_trippers.go:580]     Content-Length: 1273
	I1127 23:54:38.052789 1525568 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:54:38 GMT
	I1127 23:54:38.052797 1525568 round_trippers.go:580]     Audit-Id: 17faac65-c4c8-484f-b973-c36978292613
	I1127 23:54:38.052879 1525568 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"380"},"items":[{"metadata":{"name":"standard","uid":"9b7ad5ee-3a9a-402c-af06-c9c6781cd53b","resourceVersion":"370","creationTimestamp":"2023-11-27T23:54:37Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-11-27T23:54:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kuberne
tes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I1127 23:54:38.053491 1525568 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"9b7ad5ee-3a9a-402c-af06-c9c6781cd53b","resourceVersion":"370","creationTimestamp":"2023-11-27T23:54:37Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-11-27T23:54:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclas
s.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I1127 23:54:38.053600 1525568 round_trippers.go:463] PUT https://192.168.58.2:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1127 23:54:38.053610 1525568 round_trippers.go:469] Request Headers:
	I1127 23:54:38.053618 1525568 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:54:38.053630 1525568 round_trippers.go:473]     Content-Type: application/json
	I1127 23:54:38.053636 1525568 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1127 23:54:38.062636 1525568 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1127 23:54:38.062663 1525568 round_trippers.go:577] Response Headers:
	I1127 23:54:38.062672 1525568 round_trippers.go:580]     Audit-Id: e40708ba-e088-493b-99ee-5bd730b672f6
	I1127 23:54:38.062685 1525568 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:54:38.062692 1525568 round_trippers.go:580]     Content-Type: application/json
	I1127 23:54:38.062699 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 319b43a7-5ec8-4ef8-8721-c1f74315c4f2
	I1127 23:54:38.062705 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ef08c5e-2262-449a-8b3b-5a816f8a0487
	I1127 23:54:38.062766 1525568 round_trippers.go:580]     Content-Length: 1220
	I1127 23:54:38.062776 1525568 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:54:38 GMT
	I1127 23:54:38.062810 1525568 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"9b7ad5ee-3a9a-402c-af06-c9c6781cd53b","resourceVersion":"370","creationTimestamp":"2023-11-27T23:54:37Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-11-27T23:54:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storagecla
ss.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I1127 23:54:38.067489 1525568 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1127 23:54:38.071591 1525568 addons.go:502] enable addons completed in 1.24370404s: enabled=[storage-provisioner default-storageclass]
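
The closing PUT to /apis/storage.k8s.io/v1/storageclasses/standard re-asserts the storageclass.kubernetes.io/is-default-class annotation visible in the response body, which is what makes the minikube-hostpath class the cluster default. Setting that annotation directly, sketched with client-go:

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.Background()
	sc, err := client.StorageV1().StorageClasses().Get(ctx, "standard", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	if sc.Annotations == nil {
		sc.Annotations = map[string]string{}
	}
	// Marking a StorageClass as the cluster default is done via this
	// well-known annotation, visible in the response bodies above.
	sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "true"
	if _, err := client.StorageV1().StorageClasses().Update(ctx, sc, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
}
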
	I1127 23:54:38.103575 1525568 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-784312
	I1127 23:54:38.103603 1525568 round_trippers.go:469] Request Headers:
	I1127 23:54:38.103613 1525568 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:54:38.103621 1525568 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1127 23:54:38.106234 1525568 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:54:38.106295 1525568 round_trippers.go:577] Response Headers:
	I1127 23:54:38.106319 1525568 round_trippers.go:580]     Audit-Id: 903cbd73-3cfd-4425-b7d3-9952de63eed3
	I1127 23:54:38.106346 1525568 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:54:38.106366 1525568 round_trippers.go:580]     Content-Type: application/json
	I1127 23:54:38.106386 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 319b43a7-5ec8-4ef8-8721-c1f74315c4f2
	I1127 23:54:38.106405 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ef08c5e-2262-449a-8b3b-5a816f8a0487
	I1127 23:54:38.106430 1525568 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:54:38 GMT
	I1127 23:54:38.106586 1525568 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-784312","uid":"9179679b-49e7-4573-ae82-bca6f8470046","resourceVersion":"307","creationTimestamp":"2023-11-27T23:54:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-784312","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-784312","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_54_23_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-27T23:54:19Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1127 23:54:38.603816 1525568 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-784312
	I1127 23:54:38.603839 1525568 round_trippers.go:469] Request Headers:
	I1127 23:54:38.603848 1525568 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:54:38.603856 1525568 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1127 23:54:38.606440 1525568 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:54:38.606467 1525568 round_trippers.go:577] Response Headers:
	I1127 23:54:38.606476 1525568 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:54:38 GMT
	I1127 23:54:38.606482 1525568 round_trippers.go:580]     Audit-Id: 242b5597-d26f-41f1-9a9d-38e2a98486c4
	I1127 23:54:38.606489 1525568 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:54:38.606495 1525568 round_trippers.go:580]     Content-Type: application/json
	I1127 23:54:38.606502 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 319b43a7-5ec8-4ef8-8721-c1f74315c4f2
	I1127 23:54:38.606513 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ef08c5e-2262-449a-8b3b-5a816f8a0487
	I1127 23:54:38.606826 1525568 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-784312","uid":"9179679b-49e7-4573-ae82-bca6f8470046","resourceVersion":"307","creationTimestamp":"2023-11-27T23:54:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-784312","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-784312","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_54_23_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-27T23:54:19Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1127 23:54:39.103198 1525568 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-784312
	I1127 23:54:39.103234 1525568 round_trippers.go:469] Request Headers:
	I1127 23:54:39.103244 1525568 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:54:39.103254 1525568 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1127 23:54:39.105926 1525568 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:54:39.105951 1525568 round_trippers.go:577] Response Headers:
	I1127 23:54:39.105960 1525568 round_trippers.go:580]     Audit-Id: 80e1954f-d130-409b-aa07-5ed1126cbfab
	I1127 23:54:39.105968 1525568 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:54:39.105974 1525568 round_trippers.go:580]     Content-Type: application/json
	I1127 23:54:39.105981 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 319b43a7-5ec8-4ef8-8721-c1f74315c4f2
	I1127 23:54:39.105987 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ef08c5e-2262-449a-8b3b-5a816f8a0487
	I1127 23:54:39.105997 1525568 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:54:39 GMT
	I1127 23:54:39.106291 1525568 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-784312","uid":"9179679b-49e7-4573-ae82-bca6f8470046","resourceVersion":"307","creationTimestamp":"2023-11-27T23:54:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-784312","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-784312","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_54_23_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-27T23:54:19Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1127 23:54:39.106812 1525568 node_ready.go:58] node "multinode-784312" has status "Ready":"False"
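
From here the log settles into the node_ready poll: the node object is re-fetched every ~500ms and reported as Ready:False until kindnet is running and the kubelet flips the NodeReady condition, within the 6m0s budget noted at 23:54:36.912543. The equivalent condition check as a client-go sketch:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether the NodeReady condition is True, which is
// exactly what node_ready.go is checking in the responses above.
func nodeReady(node *corev1.Node) bool {
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	for {
		node, err := client.CoreV1().Nodes().Get(ctx, "multinode-784312", metav1.GetOptions{})
		if err == nil && nodeReady(node) {
			fmt.Println("node is Ready")
			return
		}
		select {
		case <-ctx.Done():
			panic("timed out waiting for node Ready")
		case <-time.After(500 * time.Millisecond):
		}
	}
}
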
	I1127 23:54:39.603409 1525568 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-784312
	I1127 23:54:39.603434 1525568 round_trippers.go:469] Request Headers:
	I1127 23:54:39.603444 1525568 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1127 23:54:39.603452 1525568 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:54:39.605847 1525568 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:54:39.605930 1525568 round_trippers.go:577] Response Headers:
	I1127 23:54:39.605951 1525568 round_trippers.go:580]     Content-Type: application/json
	I1127 23:54:39.605970 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 319b43a7-5ec8-4ef8-8721-c1f74315c4f2
	I1127 23:54:39.605988 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ef08c5e-2262-449a-8b3b-5a816f8a0487
	I1127 23:54:39.606046 1525568 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:54:39 GMT
	I1127 23:54:39.606068 1525568 round_trippers.go:580]     Audit-Id: 323c92e2-bbe1-432b-9332-65f26c1cb363
	I1127 23:54:39.606095 1525568 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:54:39.606203 1525568 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-784312","uid":"9179679b-49e7-4573-ae82-bca6f8470046","resourceVersion":"307","creationTimestamp":"2023-11-27T23:54:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-784312","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-784312","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_54_23_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-27T23:54:19Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1127 23:54:40.103546 1525568 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-784312
	I1127 23:54:40.103573 1525568 round_trippers.go:469] Request Headers:
	I1127 23:54:40.103584 1525568 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:54:40.103591 1525568 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1127 23:54:40.106373 1525568 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:54:40.106402 1525568 round_trippers.go:577] Response Headers:
	I1127 23:54:40.106411 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 319b43a7-5ec8-4ef8-8721-c1f74315c4f2
	I1127 23:54:40.106428 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ef08c5e-2262-449a-8b3b-5a816f8a0487
	I1127 23:54:40.106437 1525568 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:54:40 GMT
	I1127 23:54:40.106458 1525568 round_trippers.go:580]     Audit-Id: ce69dcbb-00ed-4c44-9d8c-4acd8a4b0da9
	I1127 23:54:40.106466 1525568 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:54:40.106480 1525568 round_trippers.go:580]     Content-Type: application/json
	I1127 23:54:40.106670 1525568 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-784312","uid":"9179679b-49e7-4573-ae82-bca6f8470046","resourceVersion":"307","creationTimestamp":"2023-11-27T23:54:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-784312","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-784312","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_54_23_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-27T23:54:19Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1127 23:54:40.603199 1525568 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-784312
	I1127 23:54:40.603224 1525568 round_trippers.go:469] Request Headers:
	I1127 23:54:40.603233 1525568 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:54:40.603240 1525568 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1127 23:54:40.605846 1525568 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:54:40.605925 1525568 round_trippers.go:577] Response Headers:
	I1127 23:54:40.605960 1525568 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:54:40.605984 1525568 round_trippers.go:580]     Content-Type: application/json
	I1127 23:54:40.606002 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 319b43a7-5ec8-4ef8-8721-c1f74315c4f2
	I1127 23:54:40.606014 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ef08c5e-2262-449a-8b3b-5a816f8a0487
	I1127 23:54:40.606035 1525568 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:54:40 GMT
	I1127 23:54:40.606042 1525568 round_trippers.go:580]     Audit-Id: 78af4994-5b08-4b6a-82bf-0b354b738c22
	I1127 23:54:40.606162 1525568 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-784312","uid":"9179679b-49e7-4573-ae82-bca6f8470046","resourceVersion":"307","creationTimestamp":"2023-11-27T23:54:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-784312","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-784312","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_54_23_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-27T23:54:19Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1127 23:54:41.103636 1525568 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-784312
	I1127 23:54:41.103661 1525568 round_trippers.go:469] Request Headers:
	I1127 23:54:41.103671 1525568 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:54:41.103678 1525568 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1127 23:54:41.106302 1525568 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:54:41.106381 1525568 round_trippers.go:577] Response Headers:
	I1127 23:54:41.106396 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 319b43a7-5ec8-4ef8-8721-c1f74315c4f2
	I1127 23:54:41.106412 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ef08c5e-2262-449a-8b3b-5a816f8a0487
	I1127 23:54:41.106422 1525568 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:54:41 GMT
	I1127 23:54:41.106428 1525568 round_trippers.go:580]     Audit-Id: db7d9653-821d-40ec-b3f4-7f9164d76b04
	I1127 23:54:41.106434 1525568 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:54:41.106443 1525568 round_trippers.go:580]     Content-Type: application/json
	I1127 23:54:41.106560 1525568 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-784312","uid":"9179679b-49e7-4573-ae82-bca6f8470046","resourceVersion":"307","creationTimestamp":"2023-11-27T23:54:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-784312","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-784312","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_54_23_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:54:19Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1127 23:54:41.106954 1525568 node_ready.go:58] node "multinode-784312" has status "Ready":"False"
	I1127 23:54:41.604109 1525568 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-784312
	I1127 23:54:41.604133 1525568 round_trippers.go:469] Request Headers:
	I1127 23:54:41.604143 1525568 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:54:41.604151 1525568 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1127 23:54:41.606497 1525568 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:54:41.606521 1525568 round_trippers.go:577] Response Headers:
	I1127 23:54:41.606529 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 319b43a7-5ec8-4ef8-8721-c1f74315c4f2
	I1127 23:54:41.606535 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ef08c5e-2262-449a-8b3b-5a816f8a0487
	I1127 23:54:41.606541 1525568 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:54:41 GMT
	I1127 23:54:41.606548 1525568 round_trippers.go:580]     Audit-Id: 3d58c69e-0579-4302-bc0d-4a08b2a26710
	I1127 23:54:41.606555 1525568 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:54:41.606561 1525568 round_trippers.go:580]     Content-Type: application/json
	I1127 23:54:41.606773 1525568 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-784312","uid":"9179679b-49e7-4573-ae82-bca6f8470046","resourceVersion":"307","creationTimestamp":"2023-11-27T23:54:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-784312","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-784312","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_54_23_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:54:19Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1127 23:54:42.103713 1525568 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-784312
	I1127 23:54:42.103746 1525568 round_trippers.go:469] Request Headers:
	I1127 23:54:42.103756 1525568 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:54:42.103764 1525568 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1127 23:54:42.109062 1525568 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1127 23:54:42.109090 1525568 round_trippers.go:577] Response Headers:
	I1127 23:54:42.109099 1525568 round_trippers.go:580]     Audit-Id: 2ea7f444-f57b-49af-819b-56fa3506b2c7
	I1127 23:54:42.109106 1525568 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:54:42.109113 1525568 round_trippers.go:580]     Content-Type: application/json
	I1127 23:54:42.109119 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 319b43a7-5ec8-4ef8-8721-c1f74315c4f2
	I1127 23:54:42.109126 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ef08c5e-2262-449a-8b3b-5a816f8a0487
	I1127 23:54:42.109132 1525568 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:54:42 GMT
	I1127 23:54:42.109550 1525568 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-784312","uid":"9179679b-49e7-4573-ae82-bca6f8470046","resourceVersion":"307","creationTimestamp":"2023-11-27T23:54:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-784312","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-784312","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_54_23_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:54:19Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1127 23:54:42.603157 1525568 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-784312
	I1127 23:54:42.603183 1525568 round_trippers.go:469] Request Headers:
	I1127 23:54:42.603193 1525568 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:54:42.603200 1525568 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1127 23:54:42.605820 1525568 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:54:42.605842 1525568 round_trippers.go:577] Response Headers:
	I1127 23:54:42.605905 1525568 round_trippers.go:580]     Audit-Id: 5d14ffb5-6c52-4e52-8b4a-a4a5a411e55f
	I1127 23:54:42.605918 1525568 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:54:42.605925 1525568 round_trippers.go:580]     Content-Type: application/json
	I1127 23:54:42.605936 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 319b43a7-5ec8-4ef8-8721-c1f74315c4f2
	I1127 23:54:42.605943 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ef08c5e-2262-449a-8b3b-5a816f8a0487
	I1127 23:54:42.605952 1525568 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:54:42 GMT
	I1127 23:54:42.606123 1525568 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-784312","uid":"9179679b-49e7-4573-ae82-bca6f8470046","resourceVersion":"307","creationTimestamp":"2023-11-27T23:54:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-784312","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-784312","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_54_23_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:54:19Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1127 23:54:43.103215 1525568 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-784312
	I1127 23:54:43.103239 1525568 round_trippers.go:469] Request Headers:
	I1127 23:54:43.103248 1525568 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:54:43.103256 1525568 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1127 23:54:43.105965 1525568 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:54:43.105992 1525568 round_trippers.go:577] Response Headers:
	I1127 23:54:43.106000 1525568 round_trippers.go:580]     Audit-Id: 9a068c16-c1f0-4bd9-b9df-5afd23572177
	I1127 23:54:43.106007 1525568 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:54:43.106013 1525568 round_trippers.go:580]     Content-Type: application/json
	I1127 23:54:43.106028 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 319b43a7-5ec8-4ef8-8721-c1f74315c4f2
	I1127 23:54:43.106039 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ef08c5e-2262-449a-8b3b-5a816f8a0487
	I1127 23:54:43.106046 1525568 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:54:43 GMT
	I1127 23:54:43.106210 1525568 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-784312","uid":"9179679b-49e7-4573-ae82-bca6f8470046","resourceVersion":"307","creationTimestamp":"2023-11-27T23:54:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-784312","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-784312","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_54_23_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:54:19Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1127 23:54:43.603173 1525568 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-784312
	I1127 23:54:43.603199 1525568 round_trippers.go:469] Request Headers:
	I1127 23:54:43.603210 1525568 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1127 23:54:43.603218 1525568 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:54:43.605831 1525568 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:54:43.605869 1525568 round_trippers.go:577] Response Headers:
	I1127 23:54:43.605879 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 319b43a7-5ec8-4ef8-8721-c1f74315c4f2
	I1127 23:54:43.605887 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ef08c5e-2262-449a-8b3b-5a816f8a0487
	I1127 23:54:43.605893 1525568 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:54:43 GMT
	I1127 23:54:43.605899 1525568 round_trippers.go:580]     Audit-Id: 615e17a6-a7a4-4a33-a644-b8a49161a96a
	I1127 23:54:43.605905 1525568 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:54:43.605912 1525568 round_trippers.go:580]     Content-Type: application/json
	I1127 23:54:43.606009 1525568 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-784312","uid":"9179679b-49e7-4573-ae82-bca6f8470046","resourceVersion":"307","creationTimestamp":"2023-11-27T23:54:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-784312","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-784312","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_54_23_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:54:19Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1127 23:54:43.606454 1525568 node_ready.go:58] node "multinode-784312" has status "Ready":"False"
	I1127 23:54:44.103275 1525568 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-784312
	I1127 23:54:44.103299 1525568 round_trippers.go:469] Request Headers:
	I1127 23:54:44.103309 1525568 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:54:44.103316 1525568 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1127 23:54:44.105713 1525568 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:54:44.105738 1525568 round_trippers.go:577] Response Headers:
	I1127 23:54:44.105746 1525568 round_trippers.go:580]     Audit-Id: 69e798cd-fbcc-4219-95b7-b3628af3e2be
	I1127 23:54:44.105753 1525568 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:54:44.105759 1525568 round_trippers.go:580]     Content-Type: application/json
	I1127 23:54:44.105765 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 319b43a7-5ec8-4ef8-8721-c1f74315c4f2
	I1127 23:54:44.105772 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ef08c5e-2262-449a-8b3b-5a816f8a0487
	I1127 23:54:44.105778 1525568 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:54:44 GMT
	I1127 23:54:44.105913 1525568 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-784312","uid":"9179679b-49e7-4573-ae82-bca6f8470046","resourceVersion":"307","creationTimestamp":"2023-11-27T23:54:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-784312","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-784312","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_54_23_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:54:19Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1127 23:54:44.604030 1525568 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-784312
	I1127 23:54:44.604055 1525568 round_trippers.go:469] Request Headers:
	I1127 23:54:44.604065 1525568 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:54:44.604073 1525568 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1127 23:54:44.606625 1525568 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:54:44.606650 1525568 round_trippers.go:577] Response Headers:
	I1127 23:54:44.606659 1525568 round_trippers.go:580]     Audit-Id: eb67c7be-911a-4825-a2b3-428ef8418880
	I1127 23:54:44.606666 1525568 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:54:44.606672 1525568 round_trippers.go:580]     Content-Type: application/json
	I1127 23:54:44.606702 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 319b43a7-5ec8-4ef8-8721-c1f74315c4f2
	I1127 23:54:44.606708 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ef08c5e-2262-449a-8b3b-5a816f8a0487
	I1127 23:54:44.606715 1525568 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:54:44 GMT
	I1127 23:54:44.606823 1525568 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-784312","uid":"9179679b-49e7-4573-ae82-bca6f8470046","resourceVersion":"307","creationTimestamp":"2023-11-27T23:54:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-784312","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-784312","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_54_23_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:54:19Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1127 23:54:45.103330 1525568 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-784312
	I1127 23:54:45.103360 1525568 round_trippers.go:469] Request Headers:
	I1127 23:54:45.103371 1525568 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:54:45.103378 1525568 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1127 23:54:45.108873 1525568 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1127 23:54:45.108899 1525568 round_trippers.go:577] Response Headers:
	I1127 23:54:45.108916 1525568 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:54:45 GMT
	I1127 23:54:45.108923 1525568 round_trippers.go:580]     Audit-Id: 905a810a-265e-47cc-8be0-00b3f5b6f439
	I1127 23:54:45.108929 1525568 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:54:45.108940 1525568 round_trippers.go:580]     Content-Type: application/json
	I1127 23:54:45.108949 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 319b43a7-5ec8-4ef8-8721-c1f74315c4f2
	I1127 23:54:45.108957 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ef08c5e-2262-449a-8b3b-5a816f8a0487
	I1127 23:54:45.109106 1525568 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-784312","uid":"9179679b-49e7-4573-ae82-bca6f8470046","resourceVersion":"307","creationTimestamp":"2023-11-27T23:54:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-784312","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-784312","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_54_23_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:54:19Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1127 23:54:45.603227 1525568 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-784312
	I1127 23:54:45.603252 1525568 round_trippers.go:469] Request Headers:
	I1127 23:54:45.603262 1525568 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:54:45.603270 1525568 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1127 23:54:45.605932 1525568 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:54:45.605992 1525568 round_trippers.go:577] Response Headers:
	I1127 23:54:45.606001 1525568 round_trippers.go:580]     Audit-Id: 42ccb68b-0d74-4134-8b28-405fc4ec61b2
	I1127 23:54:45.606009 1525568 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:54:45.606015 1525568 round_trippers.go:580]     Content-Type: application/json
	I1127 23:54:45.606030 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 319b43a7-5ec8-4ef8-8721-c1f74315c4f2
	I1127 23:54:45.606037 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ef08c5e-2262-449a-8b3b-5a816f8a0487
	I1127 23:54:45.606070 1525568 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:54:45 GMT
	I1127 23:54:45.606192 1525568 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-784312","uid":"9179679b-49e7-4573-ae82-bca6f8470046","resourceVersion":"307","creationTimestamp":"2023-11-27T23:54:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-784312","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-784312","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_54_23_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:54:19Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1127 23:54:45.606620 1525568 node_ready.go:58] node "multinode-784312" has status "Ready":"False"
	I1127 23:54:46.103664 1525568 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-784312
	I1127 23:54:46.103690 1525568 round_trippers.go:469] Request Headers:
	I1127 23:54:46.103700 1525568 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:54:46.103707 1525568 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1127 23:54:46.106443 1525568 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:54:46.106472 1525568 round_trippers.go:577] Response Headers:
	I1127 23:54:46.106481 1525568 round_trippers.go:580]     Audit-Id: 53f31d7f-414c-4921-a083-8e3338b3a837
	I1127 23:54:46.106488 1525568 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:54:46.106494 1525568 round_trippers.go:580]     Content-Type: application/json
	I1127 23:54:46.106500 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 319b43a7-5ec8-4ef8-8721-c1f74315c4f2
	I1127 23:54:46.106511 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ef08c5e-2262-449a-8b3b-5a816f8a0487
	I1127 23:54:46.106542 1525568 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:54:46 GMT
	I1127 23:54:46.106717 1525568 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-784312","uid":"9179679b-49e7-4573-ae82-bca6f8470046","resourceVersion":"307","creationTimestamp":"2023-11-27T23:54:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-784312","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-784312","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_54_23_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:54:19Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1127 23:54:46.603787 1525568 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-784312
	I1127 23:54:46.603813 1525568 round_trippers.go:469] Request Headers:
	I1127 23:54:46.603822 1525568 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1127 23:54:46.603830 1525568 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:54:46.606237 1525568 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:54:46.606260 1525568 round_trippers.go:577] Response Headers:
	I1127 23:54:46.606269 1525568 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:54:46 GMT
	I1127 23:54:46.606276 1525568 round_trippers.go:580]     Audit-Id: 55c17a86-b955-4e5a-bf24-714dd3df23a6
	I1127 23:54:46.606282 1525568 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:54:46.606288 1525568 round_trippers.go:580]     Content-Type: application/json
	I1127 23:54:46.606298 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 319b43a7-5ec8-4ef8-8721-c1f74315c4f2
	I1127 23:54:46.606305 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ef08c5e-2262-449a-8b3b-5a816f8a0487
	I1127 23:54:46.606404 1525568 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-784312","uid":"9179679b-49e7-4573-ae82-bca6f8470046","resourceVersion":"307","creationTimestamp":"2023-11-27T23:54:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-784312","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-784312","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_54_23_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:54:19Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1127 23:54:47.103638 1525568 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-784312
	I1127 23:54:47.103665 1525568 round_trippers.go:469] Request Headers:
	I1127 23:54:47.103675 1525568 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:54:47.103682 1525568 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1127 23:54:47.106218 1525568 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:54:47.106244 1525568 round_trippers.go:577] Response Headers:
	I1127 23:54:47.106253 1525568 round_trippers.go:580]     Audit-Id: 7a9c4809-06b2-4975-b8bb-19d79808f835
	I1127 23:54:47.106259 1525568 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:54:47.106267 1525568 round_trippers.go:580]     Content-Type: application/json
	I1127 23:54:47.106273 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 319b43a7-5ec8-4ef8-8721-c1f74315c4f2
	I1127 23:54:47.106280 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ef08c5e-2262-449a-8b3b-5a816f8a0487
	I1127 23:54:47.106287 1525568 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:54:47 GMT
	I1127 23:54:47.106418 1525568 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-784312","uid":"9179679b-49e7-4573-ae82-bca6f8470046","resourceVersion":"307","creationTimestamp":"2023-11-27T23:54:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-784312","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-784312","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_54_23_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:54:19Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1127 23:54:47.603260 1525568 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-784312
	I1127 23:54:47.603285 1525568 round_trippers.go:469] Request Headers:
	I1127 23:54:47.603296 1525568 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:54:47.603304 1525568 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1127 23:54:47.605764 1525568 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:54:47.605782 1525568 round_trippers.go:577] Response Headers:
	I1127 23:54:47.605791 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 319b43a7-5ec8-4ef8-8721-c1f74315c4f2
	I1127 23:54:47.605797 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ef08c5e-2262-449a-8b3b-5a816f8a0487
	I1127 23:54:47.605803 1525568 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:54:47 GMT
	I1127 23:54:47.605810 1525568 round_trippers.go:580]     Audit-Id: 9cab6e38-49e0-4369-82cb-115558af93fe
	I1127 23:54:47.605816 1525568 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:54:47.605823 1525568 round_trippers.go:580]     Content-Type: application/json
	I1127 23:54:47.605956 1525568 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-784312","uid":"9179679b-49e7-4573-ae82-bca6f8470046","resourceVersion":"307","creationTimestamp":"2023-11-27T23:54:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-784312","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-784312","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_54_23_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:54:19Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1127 23:54:48.104173 1525568 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-784312
	I1127 23:54:48.104201 1525568 round_trippers.go:469] Request Headers:
	I1127 23:54:48.104223 1525568 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:54:48.104230 1525568 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1127 23:54:48.107131 1525568 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:54:48.107157 1525568 round_trippers.go:577] Response Headers:
	I1127 23:54:48.107166 1525568 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:54:48 GMT
	I1127 23:54:48.107173 1525568 round_trippers.go:580]     Audit-Id: 9c477fd7-79f2-4447-87c9-2ae3120a5a88
	I1127 23:54:48.107180 1525568 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:54:48.107186 1525568 round_trippers.go:580]     Content-Type: application/json
	I1127 23:54:48.107192 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 319b43a7-5ec8-4ef8-8721-c1f74315c4f2
	I1127 23:54:48.107198 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ef08c5e-2262-449a-8b3b-5a816f8a0487
	I1127 23:54:48.107307 1525568 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-784312","uid":"9179679b-49e7-4573-ae82-bca6f8470046","resourceVersion":"307","creationTimestamp":"2023-11-27T23:54:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-784312","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-784312","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_54_23_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:54:19Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1127 23:54:48.107699 1525568 node_ready.go:58] node "multinode-784312" has status "Ready":"False"
	I1127 23:54:48.603160 1525568 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-784312
	I1127 23:54:48.603183 1525568 round_trippers.go:469] Request Headers:
	I1127 23:54:48.603194 1525568 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:54:48.603201 1525568 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1127 23:54:48.605629 1525568 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:54:48.605648 1525568 round_trippers.go:577] Response Headers:
	I1127 23:54:48.605656 1525568 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:54:48 GMT
	I1127 23:54:48.605663 1525568 round_trippers.go:580]     Audit-Id: da533d41-35d5-49e8-8077-fd5853ba50a3
	I1127 23:54:48.605672 1525568 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:54:48.605679 1525568 round_trippers.go:580]     Content-Type: application/json
	I1127 23:54:48.605685 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 319b43a7-5ec8-4ef8-8721-c1f74315c4f2
	I1127 23:54:48.605691 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ef08c5e-2262-449a-8b3b-5a816f8a0487
	I1127 23:54:48.605782 1525568 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-784312","uid":"9179679b-49e7-4573-ae82-bca6f8470046","resourceVersion":"307","creationTimestamp":"2023-11-27T23:54:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-784312","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-784312","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_54_23_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:54:19Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1127 23:54:49.103185 1525568 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-784312
	I1127 23:54:49.103211 1525568 round_trippers.go:469] Request Headers:
	I1127 23:54:49.103221 1525568 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:54:49.103228 1525568 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1127 23:54:49.105754 1525568 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:54:49.105774 1525568 round_trippers.go:577] Response Headers:
	I1127 23:54:49.105782 1525568 round_trippers.go:580]     Content-Type: application/json
	I1127 23:54:49.105789 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 319b43a7-5ec8-4ef8-8721-c1f74315c4f2
	I1127 23:54:49.105795 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ef08c5e-2262-449a-8b3b-5a816f8a0487
	I1127 23:54:49.105801 1525568 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:54:49 GMT
	I1127 23:54:49.105807 1525568 round_trippers.go:580]     Audit-Id: 7d0cebc9-1101-4fde-8f77-c4955305bc34
	I1127 23:54:49.105814 1525568 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:54:49.105948 1525568 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-784312","uid":"9179679b-49e7-4573-ae82-bca6f8470046","resourceVersion":"307","creationTimestamp":"2023-11-27T23:54:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-784312","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-784312","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_54_23_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:54:19Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1127 23:54:49.603109 1525568 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-784312
	I1127 23:54:49.603134 1525568 round_trippers.go:469] Request Headers:
	I1127 23:54:49.603144 1525568 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:54:49.603151 1525568 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1127 23:54:49.605650 1525568 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:54:49.605671 1525568 round_trippers.go:577] Response Headers:
	I1127 23:54:49.605678 1525568 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:54:49 GMT
	I1127 23:54:49.605685 1525568 round_trippers.go:580]     Audit-Id: ca2613ca-e40a-47d2-95a1-366dafbf0f87
	I1127 23:54:49.605692 1525568 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:54:49.605698 1525568 round_trippers.go:580]     Content-Type: application/json
	I1127 23:54:49.605704 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 319b43a7-5ec8-4ef8-8721-c1f74315c4f2
	I1127 23:54:49.605710 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ef08c5e-2262-449a-8b3b-5a816f8a0487
	I1127 23:54:49.605831 1525568 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-784312","uid":"9179679b-49e7-4573-ae82-bca6f8470046","resourceVersion":"307","creationTimestamp":"2023-11-27T23:54:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-784312","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-784312","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_54_23_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:54:19Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1127 23:54:50.103660 1525568 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-784312
	I1127 23:54:50.103692 1525568 round_trippers.go:469] Request Headers:
	I1127 23:54:50.103703 1525568 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:54:50.103712 1525568 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1127 23:54:50.106586 1525568 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:54:50.106611 1525568 round_trippers.go:577] Response Headers:
	I1127 23:54:50.106621 1525568 round_trippers.go:580]     Audit-Id: c85c0bd2-f334-45eb-9555-1e587fe10bbf
	I1127 23:54:50.106628 1525568 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:54:50.106634 1525568 round_trippers.go:580]     Content-Type: application/json
	I1127 23:54:50.106640 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 319b43a7-5ec8-4ef8-8721-c1f74315c4f2
	I1127 23:54:50.106647 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ef08c5e-2262-449a-8b3b-5a816f8a0487
	I1127 23:54:50.106654 1525568 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:54:50 GMT
	I1127 23:54:50.107023 1525568 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-784312","uid":"9179679b-49e7-4573-ae82-bca6f8470046","resourceVersion":"307","creationTimestamp":"2023-11-27T23:54:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-784312","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-784312","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_54_23_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:54:19Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1127 23:54:50.604101 1525568 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-784312
	I1127 23:54:50.604125 1525568 round_trippers.go:469] Request Headers:
	I1127 23:54:50.604135 1525568 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:54:50.604142 1525568 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1127 23:54:50.606986 1525568 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:54:50.607013 1525568 round_trippers.go:577] Response Headers:
	I1127 23:54:50.607024 1525568 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:54:50 GMT
	I1127 23:54:50.607030 1525568 round_trippers.go:580]     Audit-Id: e3d697f9-117b-4d03-8553-30f19366b201
	I1127 23:54:50.607036 1525568 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:54:50.607043 1525568 round_trippers.go:580]     Content-Type: application/json
	I1127 23:54:50.607049 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 319b43a7-5ec8-4ef8-8721-c1f74315c4f2
	I1127 23:54:50.607057 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ef08c5e-2262-449a-8b3b-5a816f8a0487
	I1127 23:54:50.607300 1525568 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-784312","uid":"9179679b-49e7-4573-ae82-bca6f8470046","resourceVersion":"307","creationTimestamp":"2023-11-27T23:54:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-784312","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-784312","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_54_23_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:54:19Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1127 23:54:50.607698 1525568 node_ready.go:58] node "multinode-784312" has status "Ready":"False"
	I1127 23:54:51.104034 1525568 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-784312
	I1127 23:54:51.104061 1525568 round_trippers.go:469] Request Headers:
	I1127 23:54:51.104072 1525568 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:54:51.104080 1525568 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1127 23:54:51.106887 1525568 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:54:51.106914 1525568 round_trippers.go:577] Response Headers:
	I1127 23:54:51.106924 1525568 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:54:51 GMT
	I1127 23:54:51.106931 1525568 round_trippers.go:580]     Audit-Id: 7223078c-cb8e-4c8d-a450-1900f304221d
	I1127 23:54:51.106944 1525568 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:54:51.106952 1525568 round_trippers.go:580]     Content-Type: application/json
	I1127 23:54:51.106958 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 319b43a7-5ec8-4ef8-8721-c1f74315c4f2
	I1127 23:54:51.106965 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ef08c5e-2262-449a-8b3b-5a816f8a0487
	I1127 23:54:51.107285 1525568 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-784312","uid":"9179679b-49e7-4573-ae82-bca6f8470046","resourceVersion":"307","creationTimestamp":"2023-11-27T23:54:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-784312","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-784312","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_54_23_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:54:19Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1127 23:54:51.603994 1525568 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-784312
	I1127 23:54:51.604023 1525568 round_trippers.go:469] Request Headers:
	I1127 23:54:51.604032 1525568 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:54:51.604040 1525568 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1127 23:54:51.606511 1525568 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:54:51.606534 1525568 round_trippers.go:577] Response Headers:
	I1127 23:54:51.606543 1525568 round_trippers.go:580]     Audit-Id: 623eae62-8c62-4a10-aad2-30a2e7ab3710
	I1127 23:54:51.606549 1525568 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:54:51.606556 1525568 round_trippers.go:580]     Content-Type: application/json
	I1127 23:54:51.606562 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 319b43a7-5ec8-4ef8-8721-c1f74315c4f2
	I1127 23:54:51.606573 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ef08c5e-2262-449a-8b3b-5a816f8a0487
	I1127 23:54:51.606588 1525568 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:54:51 GMT
	I1127 23:54:51.606932 1525568 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-784312","uid":"9179679b-49e7-4573-ae82-bca6f8470046","resourceVersion":"307","creationTimestamp":"2023-11-27T23:54:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-784312","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-784312","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_54_23_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:54:19Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1127 23:54:52.103527 1525568 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-784312
	I1127 23:54:52.103552 1525568 round_trippers.go:469] Request Headers:
	I1127 23:54:52.103561 1525568 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:54:52.103569 1525568 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1127 23:54:52.106187 1525568 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:54:52.106208 1525568 round_trippers.go:577] Response Headers:
	I1127 23:54:52.106217 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ef08c5e-2262-449a-8b3b-5a816f8a0487
	I1127 23:54:52.106224 1525568 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:54:52 GMT
	I1127 23:54:52.106231 1525568 round_trippers.go:580]     Audit-Id: f45ffaff-3fae-4380-ac30-7190af37fe80
	I1127 23:54:52.106238 1525568 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:54:52.106244 1525568 round_trippers.go:580]     Content-Type: application/json
	I1127 23:54:52.106252 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 319b43a7-5ec8-4ef8-8721-c1f74315c4f2
	I1127 23:54:52.106420 1525568 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-784312","uid":"9179679b-49e7-4573-ae82-bca6f8470046","resourceVersion":"307","creationTimestamp":"2023-11-27T23:54:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-784312","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-784312","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_54_23_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:54:19Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1127 23:54:52.604092 1525568 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-784312
	I1127 23:54:52.604115 1525568 round_trippers.go:469] Request Headers:
	I1127 23:54:52.604126 1525568 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:54:52.604133 1525568 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1127 23:54:52.606832 1525568 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:54:52.606858 1525568 round_trippers.go:577] Response Headers:
	I1127 23:54:52.606866 1525568 round_trippers.go:580]     Audit-Id: 2d4718f6-96b2-473a-a8d4-fc3b82c2d1ce
	I1127 23:54:52.606873 1525568 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:54:52.606880 1525568 round_trippers.go:580]     Content-Type: application/json
	I1127 23:54:52.606887 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 319b43a7-5ec8-4ef8-8721-c1f74315c4f2
	I1127 23:54:52.606893 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ef08c5e-2262-449a-8b3b-5a816f8a0487
	I1127 23:54:52.606900 1525568 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:54:52 GMT
	I1127 23:54:52.607106 1525568 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-784312","uid":"9179679b-49e7-4573-ae82-bca6f8470046","resourceVersion":"307","creationTimestamp":"2023-11-27T23:54:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-784312","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-784312","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_54_23_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:54:19Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1127 23:54:53.103267 1525568 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-784312
	I1127 23:54:53.103290 1525568 round_trippers.go:469] Request Headers:
	I1127 23:54:53.103301 1525568 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1127 23:54:53.103309 1525568 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:54:53.105834 1525568 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:54:53.105873 1525568 round_trippers.go:577] Response Headers:
	I1127 23:54:53.105882 1525568 round_trippers.go:580]     Audit-Id: e53a614e-4f52-432f-bac1-962c34b7898a
	I1127 23:54:53.105889 1525568 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:54:53.105895 1525568 round_trippers.go:580]     Content-Type: application/json
	I1127 23:54:53.105901 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 319b43a7-5ec8-4ef8-8721-c1f74315c4f2
	I1127 23:54:53.105907 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ef08c5e-2262-449a-8b3b-5a816f8a0487
	I1127 23:54:53.105913 1525568 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:54:53 GMT
	I1127 23:54:53.106054 1525568 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-784312","uid":"9179679b-49e7-4573-ae82-bca6f8470046","resourceVersion":"307","creationTimestamp":"2023-11-27T23:54:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-784312","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-784312","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_54_23_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:54:19Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1127 23:54:53.106440 1525568 node_ready.go:58] node "multinode-784312" has status "Ready":"False"
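
Each iteration above follows the same cycle: the readiness waiter GETs /api/v1/nodes/multinode-784312 roughly every 500ms and inspects the Node's "Ready" condition, looping while it reports "False". A minimal sketch of that polling pattern in Go (an illustration only, not minikube's actual node_ready.go; the API server URL and the skipped TLS verification below are placeholder assumptions, since a real client would present cluster credentials):

package main

import (
	"crypto/tls"
	"encoding/json"
	"fmt"
	"net/http"
	"time"
)

// nodeStatus mirrors only the fields of a v1 Node needed for the check.
type nodeStatus struct {
	Status struct {
		Conditions []struct {
			Type   string `json:"type"`
			Status string `json:"status"`
		} `json:"conditions"`
	} `json:"status"`
}

// nodeReady fetches the Node object and reports whether its Ready
// condition is "True", mirroring the check logged by node_ready.go.
func nodeReady(client *http.Client, url string) (bool, error) {
	resp, err := client.Get(url)
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	var n nodeStatus
	if err := json.NewDecoder(resp.Body).Decode(&n); err != nil {
		return false, err
	}
	for _, c := range n.Status.Conditions {
		if c.Type == "Ready" {
			return c.Status == "True", nil
		}
	}
	return false, nil
}

func main() {
	// Placeholder transport: a real client would use the cluster CA and
	// client certificates instead of skipping verification.
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}
	url := "https://192.168.58.2:8443/api/v1/nodes/multinode-784312"
	for {
		ready, err := nodeReady(client, url)
		if err != nil {
			fmt.Println("poll error:", err)
		} else if ready {
			fmt.Println("node is Ready")
			return
		}
		time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence in the log
	}
}

The same condition can be read from the CLI with kubectl get node multinode-784312 -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}', which keeps printing False until the kubelet finishes bringing the node up.
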
	I1127 23:54:53.603718 1525568 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-784312
	I1127 23:54:53.603744 1525568 round_trippers.go:469] Request Headers:
	I1127 23:54:53.603754 1525568 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:54:53.603762 1525568 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1127 23:54:53.606326 1525568 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:54:53.606353 1525568 round_trippers.go:577] Response Headers:
	I1127 23:54:53.606362 1525568 round_trippers.go:580]     Audit-Id: 5d63b4ca-ca8b-4f28-ba33-25a12ffec475
	I1127 23:54:53.606369 1525568 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:54:53.606375 1525568 round_trippers.go:580]     Content-Type: application/json
	I1127 23:54:53.606381 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 319b43a7-5ec8-4ef8-8721-c1f74315c4f2
	I1127 23:54:53.606388 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ef08c5e-2262-449a-8b3b-5a816f8a0487
	I1127 23:54:53.606394 1525568 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:54:53 GMT
	I1127 23:54:53.606517 1525568 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-784312","uid":"9179679b-49e7-4573-ae82-bca6f8470046","resourceVersion":"307","creationTimestamp":"2023-11-27T23:54:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-784312","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-784312","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_54_23_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:54:19Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1127 23:54:54.103670 1525568 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-784312
	I1127 23:54:54.103699 1525568 round_trippers.go:469] Request Headers:
	I1127 23:54:54.103709 1525568 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:54:54.103716 1525568 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1127 23:54:54.106618 1525568 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:54:54.106643 1525568 round_trippers.go:577] Response Headers:
	I1127 23:54:54.106652 1525568 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:54:54.106660 1525568 round_trippers.go:580]     Content-Type: application/json
	I1127 23:54:54.106667 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 319b43a7-5ec8-4ef8-8721-c1f74315c4f2
	I1127 23:54:54.106673 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ef08c5e-2262-449a-8b3b-5a816f8a0487
	I1127 23:54:54.106682 1525568 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:54:54 GMT
	I1127 23:54:54.106689 1525568 round_trippers.go:580]     Audit-Id: c8beb52a-4f95-4245-92bc-6563cc8653d8
	I1127 23:54:54.106813 1525568 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-784312","uid":"9179679b-49e7-4573-ae82-bca6f8470046","resourceVersion":"307","creationTimestamp":"2023-11-27T23:54:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-784312","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-784312","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_54_23_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:54:19Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1127 23:54:54.603161 1525568 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-784312
	I1127 23:54:54.603186 1525568 round_trippers.go:469] Request Headers:
	I1127 23:54:54.603197 1525568 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:54:54.603210 1525568 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1127 23:54:54.605569 1525568 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:54:54.605595 1525568 round_trippers.go:577] Response Headers:
	I1127 23:54:54.605604 1525568 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:54:54.605611 1525568 round_trippers.go:580]     Content-Type: application/json
	I1127 23:54:54.605618 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 319b43a7-5ec8-4ef8-8721-c1f74315c4f2
	I1127 23:54:54.605630 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ef08c5e-2262-449a-8b3b-5a816f8a0487
	I1127 23:54:54.605637 1525568 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:54:54 GMT
	I1127 23:54:54.605643 1525568 round_trippers.go:580]     Audit-Id: 61f897ba-6653-451a-8611-40ced5aa11f4
	I1127 23:54:54.606053 1525568 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-784312","uid":"9179679b-49e7-4573-ae82-bca6f8470046","resourceVersion":"307","creationTimestamp":"2023-11-27T23:54:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-784312","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-784312","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_54_23_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:54:19Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1127 23:54:55.103208 1525568 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-784312
	I1127 23:54:55.103233 1525568 round_trippers.go:469] Request Headers:
	I1127 23:54:55.103244 1525568 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:54:55.103251 1525568 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1127 23:54:55.105778 1525568 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:54:55.105801 1525568 round_trippers.go:577] Response Headers:
	I1127 23:54:55.105809 1525568 round_trippers.go:580]     Audit-Id: 91e8cbdd-d163-4f50-a2d6-045a7d39f8c2
	I1127 23:54:55.105816 1525568 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:54:55.105822 1525568 round_trippers.go:580]     Content-Type: application/json
	I1127 23:54:55.105828 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 319b43a7-5ec8-4ef8-8721-c1f74315c4f2
	I1127 23:54:55.105835 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ef08c5e-2262-449a-8b3b-5a816f8a0487
	I1127 23:54:55.105842 1525568 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:54:55 GMT
	I1127 23:54:55.105998 1525568 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-784312","uid":"9179679b-49e7-4573-ae82-bca6f8470046","resourceVersion":"307","creationTimestamp":"2023-11-27T23:54:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-784312","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-784312","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_54_23_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:54:19Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1127 23:54:55.603135 1525568 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-784312
	I1127 23:54:55.603160 1525568 round_trippers.go:469] Request Headers:
	I1127 23:54:55.603170 1525568 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:54:55.603178 1525568 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1127 23:54:55.605750 1525568 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:54:55.605781 1525568 round_trippers.go:577] Response Headers:
	I1127 23:54:55.605790 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 319b43a7-5ec8-4ef8-8721-c1f74315c4f2
	I1127 23:54:55.605797 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ef08c5e-2262-449a-8b3b-5a816f8a0487
	I1127 23:54:55.605804 1525568 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:54:55 GMT
	I1127 23:54:55.605810 1525568 round_trippers.go:580]     Audit-Id: 30273df5-aca9-4fde-899a-b2d3af72dd5c
	I1127 23:54:55.605820 1525568 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:54:55.605835 1525568 round_trippers.go:580]     Content-Type: application/json
	I1127 23:54:55.606174 1525568 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-784312","uid":"9179679b-49e7-4573-ae82-bca6f8470046","resourceVersion":"307","creationTimestamp":"2023-11-27T23:54:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-784312","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-784312","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_54_23_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:54:19Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1127 23:54:55.606568 1525568 node_ready.go:58] node "multinode-784312" has status "Ready":"False"
	I1127 23:54:56.103834 1525568 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-784312
	I1127 23:54:56.103859 1525568 round_trippers.go:469] Request Headers:
	I1127 23:54:56.103869 1525568 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:54:56.103877 1525568 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1127 23:54:56.106405 1525568 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:54:56.106428 1525568 round_trippers.go:577] Response Headers:
	I1127 23:54:56.106436 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 319b43a7-5ec8-4ef8-8721-c1f74315c4f2
	I1127 23:54:56.106443 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ef08c5e-2262-449a-8b3b-5a816f8a0487
	I1127 23:54:56.106449 1525568 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:54:56 GMT
	I1127 23:54:56.106456 1525568 round_trippers.go:580]     Audit-Id: 923b5d01-3071-4024-8b9d-9a84f426fc1b
	I1127 23:54:56.106462 1525568 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:54:56.106469 1525568 round_trippers.go:580]     Content-Type: application/json
	I1127 23:54:56.106597 1525568 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-784312","uid":"9179679b-49e7-4573-ae82-bca6f8470046","resourceVersion":"307","creationTimestamp":"2023-11-27T23:54:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-784312","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-784312","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_54_23_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:54:19Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1127 23:54:56.603081 1525568 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-784312
	I1127 23:54:56.603108 1525568 round_trippers.go:469] Request Headers:
	I1127 23:54:56.603118 1525568 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:54:56.603125 1525568 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1127 23:54:56.605593 1525568 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:54:56.605614 1525568 round_trippers.go:577] Response Headers:
	I1127 23:54:56.605623 1525568 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:54:56.605629 1525568 round_trippers.go:580]     Content-Type: application/json
	I1127 23:54:56.605635 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 319b43a7-5ec8-4ef8-8721-c1f74315c4f2
	I1127 23:54:56.605641 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ef08c5e-2262-449a-8b3b-5a816f8a0487
	I1127 23:54:56.605648 1525568 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:54:56 GMT
	I1127 23:54:56.605654 1525568 round_trippers.go:580]     Audit-Id: d17f3478-6672-4790-8428-83671485da58
	I1127 23:54:56.605773 1525568 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-784312","uid":"9179679b-49e7-4573-ae82-bca6f8470046","resourceVersion":"307","creationTimestamp":"2023-11-27T23:54:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-784312","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-784312","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_54_23_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:54:19Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1127 23:54:57.103236 1525568 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-784312
	I1127 23:54:57.103264 1525568 round_trippers.go:469] Request Headers:
	I1127 23:54:57.103281 1525568 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1127 23:54:57.103289 1525568 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:54:57.106005 1525568 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:54:57.106028 1525568 round_trippers.go:577] Response Headers:
	I1127 23:54:57.106037 1525568 round_trippers.go:580]     Audit-Id: aa64aa79-e9e0-4e7d-9970-dbd70b795286
	I1127 23:54:57.106043 1525568 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:54:57.106049 1525568 round_trippers.go:580]     Content-Type: application/json
	I1127 23:54:57.106055 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 319b43a7-5ec8-4ef8-8721-c1f74315c4f2
	I1127 23:54:57.106061 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ef08c5e-2262-449a-8b3b-5a816f8a0487
	I1127 23:54:57.106068 1525568 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:54:57 GMT
	I1127 23:54:57.106196 1525568 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-784312","uid":"9179679b-49e7-4573-ae82-bca6f8470046","resourceVersion":"307","creationTimestamp":"2023-11-27T23:54:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-784312","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-784312","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_54_23_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:54:19Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1127 23:54:57.603247 1525568 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-784312
	I1127 23:54:57.603268 1525568 round_trippers.go:469] Request Headers:
	I1127 23:54:57.603278 1525568 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:54:57.603285 1525568 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1127 23:54:57.605768 1525568 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:54:57.605794 1525568 round_trippers.go:577] Response Headers:
	I1127 23:54:57.605802 1525568 round_trippers.go:580]     Audit-Id: f96373ed-292d-4164-8fcc-9ff0ecaa675d
	I1127 23:54:57.605809 1525568 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:54:57.605816 1525568 round_trippers.go:580]     Content-Type: application/json
	I1127 23:54:57.605822 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 319b43a7-5ec8-4ef8-8721-c1f74315c4f2
	I1127 23:54:57.605828 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ef08c5e-2262-449a-8b3b-5a816f8a0487
	I1127 23:54:57.605835 1525568 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:54:57 GMT
	I1127 23:54:57.606040 1525568 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-784312","uid":"9179679b-49e7-4573-ae82-bca6f8470046","resourceVersion":"307","creationTimestamp":"2023-11-27T23:54:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-784312","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-784312","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_54_23_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:54:19Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1127 23:54:58.103761 1525568 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-784312
	I1127 23:54:58.103788 1525568 round_trippers.go:469] Request Headers:
	I1127 23:54:58.103797 1525568 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:54:58.103805 1525568 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1127 23:54:58.106303 1525568 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:54:58.106329 1525568 round_trippers.go:577] Response Headers:
	I1127 23:54:58.106338 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 319b43a7-5ec8-4ef8-8721-c1f74315c4f2
	I1127 23:54:58.106345 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ef08c5e-2262-449a-8b3b-5a816f8a0487
	I1127 23:54:58.106351 1525568 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:54:58 GMT
	I1127 23:54:58.106358 1525568 round_trippers.go:580]     Audit-Id: 7e3d9163-5ea0-4cfd-86c9-4e46f278f8b3
	I1127 23:54:58.106364 1525568 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:54:58.106374 1525568 round_trippers.go:580]     Content-Type: application/json
	I1127 23:54:58.106556 1525568 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-784312","uid":"9179679b-49e7-4573-ae82-bca6f8470046","resourceVersion":"307","creationTimestamp":"2023-11-27T23:54:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-784312","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-784312","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_54_23_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:54:19Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1127 23:54:58.106959 1525568 node_ready.go:58] node "multinode-784312" has status "Ready":"False"
	I1127 23:54:58.603731 1525568 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-784312
	I1127 23:54:58.603754 1525568 round_trippers.go:469] Request Headers:
	I1127 23:54:58.603764 1525568 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:54:58.603771 1525568 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1127 23:54:58.606303 1525568 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:54:58.606323 1525568 round_trippers.go:577] Response Headers:
	I1127 23:54:58.606332 1525568 round_trippers.go:580]     Audit-Id: 945b7020-878a-401d-92d7-a1899113824a
	I1127 23:54:58.606339 1525568 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:54:58.606345 1525568 round_trippers.go:580]     Content-Type: application/json
	I1127 23:54:58.606352 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 319b43a7-5ec8-4ef8-8721-c1f74315c4f2
	I1127 23:54:58.606359 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ef08c5e-2262-449a-8b3b-5a816f8a0487
	I1127 23:54:58.606365 1525568 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:54:58 GMT
	I1127 23:54:58.606532 1525568 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-784312","uid":"9179679b-49e7-4573-ae82-bca6f8470046","resourceVersion":"307","creationTimestamp":"2023-11-27T23:54:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-784312","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-784312","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_54_23_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:54:19Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1127 23:54:59.103085 1525568 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-784312
	I1127 23:54:59.103108 1525568 round_trippers.go:469] Request Headers:
	I1127 23:54:59.103118 1525568 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:54:59.103134 1525568 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1127 23:54:59.105532 1525568 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:54:59.105553 1525568 round_trippers.go:577] Response Headers:
	I1127 23:54:59.105561 1525568 round_trippers.go:580]     Audit-Id: 0396617a-4058-4403-8d70-bbf0402f78e0
	I1127 23:54:59.105568 1525568 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:54:59.105574 1525568 round_trippers.go:580]     Content-Type: application/json
	I1127 23:54:59.105580 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 319b43a7-5ec8-4ef8-8721-c1f74315c4f2
	I1127 23:54:59.105586 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ef08c5e-2262-449a-8b3b-5a816f8a0487
	I1127 23:54:59.105592 1525568 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:54:59 GMT
	I1127 23:54:59.105690 1525568 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-784312","uid":"9179679b-49e7-4573-ae82-bca6f8470046","resourceVersion":"307","creationTimestamp":"2023-11-27T23:54:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-784312","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-784312","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_54_23_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:54:19Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1127 23:54:59.603849 1525568 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-784312
	I1127 23:54:59.603873 1525568 round_trippers.go:469] Request Headers:
	I1127 23:54:59.603882 1525568 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:54:59.603891 1525568 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1127 23:54:59.606338 1525568 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:54:59.606362 1525568 round_trippers.go:577] Response Headers:
	I1127 23:54:59.606371 1525568 round_trippers.go:580]     Audit-Id: 9f00f8be-e166-433d-a660-f685f6cc4316
	I1127 23:54:59.606377 1525568 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:54:59.606385 1525568 round_trippers.go:580]     Content-Type: application/json
	I1127 23:54:59.606391 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 319b43a7-5ec8-4ef8-8721-c1f74315c4f2
	I1127 23:54:59.606401 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ef08c5e-2262-449a-8b3b-5a816f8a0487
	I1127 23:54:59.606407 1525568 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:54:59 GMT
	I1127 23:54:59.606567 1525568 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-784312","uid":"9179679b-49e7-4573-ae82-bca6f8470046","resourceVersion":"307","creationTimestamp":"2023-11-27T23:54:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-784312","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-784312","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_54_23_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:54:19Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1127 23:55:00.103229 1525568 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-784312
	I1127 23:55:00.103257 1525568 round_trippers.go:469] Request Headers:
	I1127 23:55:00.103269 1525568 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:00.103277 1525568 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1127 23:55:00.106822 1525568 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1127 23:55:00.106852 1525568 round_trippers.go:577] Response Headers:
	I1127 23:55:00.106863 1525568 round_trippers.go:580]     Audit-Id: 36dd30c4-7405-4a7c-98fd-cae8337dda98
	I1127 23:55:00.106870 1525568 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:00.106876 1525568 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:00.106882 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 319b43a7-5ec8-4ef8-8721-c1f74315c4f2
	I1127 23:55:00.106889 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ef08c5e-2262-449a-8b3b-5a816f8a0487
	I1127 23:55:00.106896 1525568 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:00 GMT
	I1127 23:55:00.107031 1525568 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-784312","uid":"9179679b-49e7-4573-ae82-bca6f8470046","resourceVersion":"307","creationTimestamp":"2023-11-27T23:54:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-784312","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-784312","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_54_23_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:54:19Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1127 23:55:00.107457 1525568 node_ready.go:58] node "multinode-784312" has status "Ready":"False"
	I1127 23:55:00.603134 1525568 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-784312
	I1127 23:55:00.603161 1525568 round_trippers.go:469] Request Headers:
	I1127 23:55:00.603172 1525568 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1127 23:55:00.603186 1525568 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:00.605782 1525568 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:55:00.605803 1525568 round_trippers.go:577] Response Headers:
	I1127 23:55:00.605813 1525568 round_trippers.go:580]     Audit-Id: d7e64aa3-340c-4dce-8b15-4c2d8df925b6
	I1127 23:55:00.605820 1525568 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:00.605826 1525568 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:00.605832 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 319b43a7-5ec8-4ef8-8721-c1f74315c4f2
	I1127 23:55:00.605839 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ef08c5e-2262-449a-8b3b-5a816f8a0487
	I1127 23:55:00.605845 1525568 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:00 GMT
	I1127 23:55:00.605961 1525568 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-784312","uid":"9179679b-49e7-4573-ae82-bca6f8470046","resourceVersion":"307","creationTimestamp":"2023-11-27T23:54:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-784312","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-784312","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_54_23_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:54:19Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1127 23:55:01.103527 1525568 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-784312
	I1127 23:55:01.103553 1525568 round_trippers.go:469] Request Headers:
	I1127 23:55:01.103564 1525568 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:01.103581 1525568 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1127 23:55:01.106128 1525568 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:55:01.106153 1525568 round_trippers.go:577] Response Headers:
	I1127 23:55:01.106162 1525568 round_trippers.go:580]     Audit-Id: d8569729-e21d-4175-a36c-08c484a22f1b
	I1127 23:55:01.106169 1525568 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:01.106176 1525568 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:01.106183 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 319b43a7-5ec8-4ef8-8721-c1f74315c4f2
	I1127 23:55:01.106189 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ef08c5e-2262-449a-8b3b-5a816f8a0487
	I1127 23:55:01.106197 1525568 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:01 GMT
	I1127 23:55:01.106586 1525568 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-784312","uid":"9179679b-49e7-4573-ae82-bca6f8470046","resourceVersion":"307","creationTimestamp":"2023-11-27T23:54:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-784312","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-784312","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_54_23_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:54:19Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1127 23:55:01.603646 1525568 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-784312
	I1127 23:55:01.603670 1525568 round_trippers.go:469] Request Headers:
	I1127 23:55:01.603681 1525568 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1127 23:55:01.603689 1525568 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:01.606184 1525568 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:55:01.606213 1525568 round_trippers.go:577] Response Headers:
	I1127 23:55:01.606222 1525568 round_trippers.go:580]     Audit-Id: eb1578a1-f66e-4e49-a624-e5787ad2e78b
	I1127 23:55:01.606229 1525568 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:01.606235 1525568 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:01.606242 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 319b43a7-5ec8-4ef8-8721-c1f74315c4f2
	I1127 23:55:01.606248 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ef08c5e-2262-449a-8b3b-5a816f8a0487
	I1127 23:55:01.606255 1525568 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:01 GMT
	I1127 23:55:01.606607 1525568 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-784312","uid":"9179679b-49e7-4573-ae82-bca6f8470046","resourceVersion":"307","creationTimestamp":"2023-11-27T23:54:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-784312","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-784312","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_54_23_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:54:19Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1127 23:55:02.104008 1525568 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-784312
	I1127 23:55:02.104035 1525568 round_trippers.go:469] Request Headers:
	I1127 23:55:02.104046 1525568 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:02.104053 1525568 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1127 23:55:02.106638 1525568 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:55:02.106661 1525568 round_trippers.go:577] Response Headers:
	I1127 23:55:02.106719 1525568 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:02.106726 1525568 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:02.106732 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 319b43a7-5ec8-4ef8-8721-c1f74315c4f2
	I1127 23:55:02.106739 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ef08c5e-2262-449a-8b3b-5a816f8a0487
	I1127 23:55:02.106745 1525568 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:02 GMT
	I1127 23:55:02.106751 1525568 round_trippers.go:580]     Audit-Id: 97774ce6-84cc-4922-ad73-d94b084c8602
	I1127 23:55:02.106872 1525568 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-784312","uid":"9179679b-49e7-4573-ae82-bca6f8470046","resourceVersion":"307","creationTimestamp":"2023-11-27T23:54:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-784312","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-784312","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_54_23_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:54:19Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1127 23:55:02.603166 1525568 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-784312
	I1127 23:55:02.603190 1525568 round_trippers.go:469] Request Headers:
	I1127 23:55:02.603199 1525568 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1127 23:55:02.603210 1525568 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:02.605826 1525568 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:55:02.605849 1525568 round_trippers.go:577] Response Headers:
	I1127 23:55:02.605876 1525568 round_trippers.go:580]     Audit-Id: 9ea2be02-57dd-46b9-b0c7-3c81bad26155
	I1127 23:55:02.605883 1525568 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:02.605889 1525568 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:02.605896 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 319b43a7-5ec8-4ef8-8721-c1f74315c4f2
	I1127 23:55:02.605905 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ef08c5e-2262-449a-8b3b-5a816f8a0487
	I1127 23:55:02.605912 1525568 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:02 GMT
	I1127 23:55:02.606315 1525568 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-784312","uid":"9179679b-49e7-4573-ae82-bca6f8470046","resourceVersion":"307","creationTimestamp":"2023-11-27T23:54:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-784312","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-784312","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_54_23_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:54:19Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1127 23:55:02.606725 1525568 node_ready.go:58] node "multinode-784312" has status "Ready":"False"
	I1127 23:55:03.104070 1525568 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-784312
	I1127 23:55:03.104095 1525568 round_trippers.go:469] Request Headers:
	I1127 23:55:03.104105 1525568 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:03.104113 1525568 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1127 23:55:03.106824 1525568 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:55:03.106852 1525568 round_trippers.go:577] Response Headers:
	I1127 23:55:03.106860 1525568 round_trippers.go:580]     Audit-Id: 359e2b26-c22e-4631-9b35-571399abd071
	I1127 23:55:03.106867 1525568 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:03.106873 1525568 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:03.106879 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 319b43a7-5ec8-4ef8-8721-c1f74315c4f2
	I1127 23:55:03.106886 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ef08c5e-2262-449a-8b3b-5a816f8a0487
	I1127 23:55:03.106893 1525568 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:03 GMT
	I1127 23:55:03.107013 1525568 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-784312","uid":"9179679b-49e7-4573-ae82-bca6f8470046","resourceVersion":"307","creationTimestamp":"2023-11-27T23:54:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-784312","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-784312","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_54_23_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:54:19Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1127 23:55:03.603254 1525568 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-784312
	I1127 23:55:03.603277 1525568 round_trippers.go:469] Request Headers:
	I1127 23:55:03.603287 1525568 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:03.603294 1525568 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1127 23:55:03.605651 1525568 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:55:03.605681 1525568 round_trippers.go:577] Response Headers:
	I1127 23:55:03.605691 1525568 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:03.605698 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 319b43a7-5ec8-4ef8-8721-c1f74315c4f2
	I1127 23:55:03.605705 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ef08c5e-2262-449a-8b3b-5a816f8a0487
	I1127 23:55:03.605712 1525568 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:03 GMT
	I1127 23:55:03.605721 1525568 round_trippers.go:580]     Audit-Id: b0c306d1-f236-4dec-8291-656a103e6396
	I1127 23:55:03.605734 1525568 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:03.606111 1525568 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-784312","uid":"9179679b-49e7-4573-ae82-bca6f8470046","resourceVersion":"307","creationTimestamp":"2023-11-27T23:54:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-784312","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-784312","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_54_23_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:54:19Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1127 23:55:04.103642 1525568 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-784312
	I1127 23:55:04.103670 1525568 round_trippers.go:469] Request Headers:
	I1127 23:55:04.103680 1525568 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:04.103688 1525568 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1127 23:55:04.106349 1525568 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:55:04.106415 1525568 round_trippers.go:577] Response Headers:
	I1127 23:55:04.106441 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 319b43a7-5ec8-4ef8-8721-c1f74315c4f2
	I1127 23:55:04.106461 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ef08c5e-2262-449a-8b3b-5a816f8a0487
	I1127 23:55:04.106491 1525568 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:04 GMT
	I1127 23:55:04.106507 1525568 round_trippers.go:580]     Audit-Id: c87e9c4a-b483-49d3-8cc5-e6f4ecf5f068
	I1127 23:55:04.106514 1525568 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:04.106520 1525568 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:04.106665 1525568 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-784312","uid":"9179679b-49e7-4573-ae82-bca6f8470046","resourceVersion":"307","creationTimestamp":"2023-11-27T23:54:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-784312","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-784312","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_54_23_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:54:19Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1127 23:55:04.603126 1525568 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-784312
	I1127 23:55:04.603149 1525568 round_trippers.go:469] Request Headers:
	I1127 23:55:04.603159 1525568 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:04.603171 1525568 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1127 23:55:04.605726 1525568 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:55:04.605757 1525568 round_trippers.go:577] Response Headers:
	I1127 23:55:04.605766 1525568 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:04 GMT
	I1127 23:55:04.605773 1525568 round_trippers.go:580]     Audit-Id: 9b2ee39f-2a05-4bd2-a58a-5963f753290e
	I1127 23:55:04.605779 1525568 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:04.605786 1525568 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:04.605792 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 319b43a7-5ec8-4ef8-8721-c1f74315c4f2
	I1127 23:55:04.605798 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ef08c5e-2262-449a-8b3b-5a816f8a0487
	I1127 23:55:04.605936 1525568 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-784312","uid":"9179679b-49e7-4573-ae82-bca6f8470046","resourceVersion":"307","creationTimestamp":"2023-11-27T23:54:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-784312","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-784312","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_54_23_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:54:19Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1127 23:55:05.103415 1525568 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-784312
	I1127 23:55:05.103445 1525568 round_trippers.go:469] Request Headers:
	I1127 23:55:05.103461 1525568 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:05.103469 1525568 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1127 23:55:05.106059 1525568 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:55:05.106086 1525568 round_trippers.go:577] Response Headers:
	I1127 23:55:05.106095 1525568 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:05.106103 1525568 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:05.106109 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 319b43a7-5ec8-4ef8-8721-c1f74315c4f2
	I1127 23:55:05.106117 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ef08c5e-2262-449a-8b3b-5a816f8a0487
	I1127 23:55:05.106123 1525568 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:05 GMT
	I1127 23:55:05.106130 1525568 round_trippers.go:580]     Audit-Id: 0bd78b2d-be85-42b5-842d-67b2c38493c9
	I1127 23:55:05.106570 1525568 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-784312","uid":"9179679b-49e7-4573-ae82-bca6f8470046","resourceVersion":"307","creationTimestamp":"2023-11-27T23:54:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-784312","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-784312","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_54_23_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:54:19Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1127 23:55:05.106998 1525568 node_ready.go:58] node "multinode-784312" has status "Ready":"False"
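
For context on the recurring headers above: the X-Kubernetes-Pf-Flowschema-Uid and X-Kubernetes-Pf-Prioritylevel-Uid values identify the API Priority and Fairness FlowSchema and PriorityLevel objects that classified each request, and the per-request header/body dumps themselves come from client-go's debugging round tripper (round_trippers.go), which these tests run at high log verbosity.
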
	I1127 23:55:05.603753 1525568 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-784312
	I1127 23:55:05.603777 1525568 round_trippers.go:469] Request Headers:
	I1127 23:55:05.603788 1525568 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:05.603795 1525568 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1127 23:55:05.606238 1525568 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:55:05.606303 1525568 round_trippers.go:577] Response Headers:
	I1127 23:55:05.606326 1525568 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:05.606344 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 319b43a7-5ec8-4ef8-8721-c1f74315c4f2
	I1127 23:55:05.606376 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ef08c5e-2262-449a-8b3b-5a816f8a0487
	I1127 23:55:05.606399 1525568 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:05 GMT
	I1127 23:55:05.606416 1525568 round_trippers.go:580]     Audit-Id: c3b46f2a-e2fd-42ad-ac9b-baaab15c31cc
	I1127 23:55:05.606432 1525568 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:05.606559 1525568 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-784312","uid":"9179679b-49e7-4573-ae82-bca6f8470046","resourceVersion":"307","creationTimestamp":"2023-11-27T23:54:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-784312","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-784312","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_54_23_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-27T23:54:19Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1127 23:55:06.103723 1525568 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-784312
	I1127 23:55:06.103754 1525568 round_trippers.go:469] Request Headers:
	I1127 23:55:06.103766 1525568 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:06.103774 1525568 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1127 23:55:06.106520 1525568 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:55:06.106548 1525568 round_trippers.go:577] Response Headers:
	I1127 23:55:06.106557 1525568 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:06.106564 1525568 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:06.106570 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 319b43a7-5ec8-4ef8-8721-c1f74315c4f2
	I1127 23:55:06.106577 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ef08c5e-2262-449a-8b3b-5a816f8a0487
	I1127 23:55:06.106583 1525568 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:06 GMT
	I1127 23:55:06.106590 1525568 round_trippers.go:580]     Audit-Id: dfc8ce6e-2aed-4294-aa10-975976d232cf
	I1127 23:55:06.106748 1525568 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-784312","uid":"9179679b-49e7-4573-ae82-bca6f8470046","resourceVersion":"307","creationTimestamp":"2023-11-27T23:54:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-784312","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-784312","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_54_23_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-27T23:54:19Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1127 23:55:06.603930 1525568 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-784312
	I1127 23:55:06.603956 1525568 round_trippers.go:469] Request Headers:
	I1127 23:55:06.603966 1525568 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:06.603973 1525568 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1127 23:55:06.606568 1525568 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:55:06.606596 1525568 round_trippers.go:577] Response Headers:
	I1127 23:55:06.606604 1525568 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:06.606610 1525568 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:06.606617 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 319b43a7-5ec8-4ef8-8721-c1f74315c4f2
	I1127 23:55:06.606623 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ef08c5e-2262-449a-8b3b-5a816f8a0487
	I1127 23:55:06.606631 1525568 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:06 GMT
	I1127 23:55:06.606637 1525568 round_trippers.go:580]     Audit-Id: 019c69e1-7bc2-4017-af91-aaccca86d81c
	I1127 23:55:06.606730 1525568 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-784312","uid":"9179679b-49e7-4573-ae82-bca6f8470046","resourceVersion":"307","creationTimestamp":"2023-11-27T23:54:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-784312","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-784312","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_54_23_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-27T23:54:19Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1127 23:55:07.103098 1525568 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-784312
	I1127 23:55:07.103132 1525568 round_trippers.go:469] Request Headers:
	I1127 23:55:07.103145 1525568 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:07.103193 1525568 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1127 23:55:07.105792 1525568 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:55:07.105814 1525568 round_trippers.go:577] Response Headers:
	I1127 23:55:07.105822 1525568 round_trippers.go:580]     Audit-Id: a299b408-1fe2-499b-855b-20814cfa7af7
	I1127 23:55:07.105829 1525568 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:07.105835 1525568 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:07.105841 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 319b43a7-5ec8-4ef8-8721-c1f74315c4f2
	I1127 23:55:07.105848 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ef08c5e-2262-449a-8b3b-5a816f8a0487
	I1127 23:55:07.105872 1525568 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:07 GMT
	I1127 23:55:07.106008 1525568 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-784312","uid":"9179679b-49e7-4573-ae82-bca6f8470046","resourceVersion":"307","creationTimestamp":"2023-11-27T23:54:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-784312","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-784312","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_54_23_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-27T23:54:19Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1127 23:55:07.603157 1525568 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-784312
	I1127 23:55:07.603201 1525568 round_trippers.go:469] Request Headers:
	I1127 23:55:07.603211 1525568 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:07.603218 1525568 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1127 23:55:07.605789 1525568 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:55:07.605817 1525568 round_trippers.go:577] Response Headers:
	I1127 23:55:07.605826 1525568 round_trippers.go:580]     Audit-Id: 68d20131-4cab-4f3d-8610-cf7ef25c18f4
	I1127 23:55:07.605833 1525568 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:07.605839 1525568 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:07.605846 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 319b43a7-5ec8-4ef8-8721-c1f74315c4f2
	I1127 23:55:07.605867 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ef08c5e-2262-449a-8b3b-5a816f8a0487
	I1127 23:55:07.605875 1525568 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:07 GMT
	I1127 23:55:07.605998 1525568 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-784312","uid":"9179679b-49e7-4573-ae82-bca6f8470046","resourceVersion":"307","creationTimestamp":"2023-11-27T23:54:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-784312","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-784312","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_54_23_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-27T23:54:19Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1127 23:55:07.606422 1525568 node_ready.go:58] node "multinode-784312" has status "Ready":"False"
	I1127 23:55:08.103189 1525568 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-784312
	I1127 23:55:08.103218 1525568 round_trippers.go:469] Request Headers:
	I1127 23:55:08.103231 1525568 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:08.103239 1525568 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1127 23:55:08.105774 1525568 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:55:08.105800 1525568 round_trippers.go:577] Response Headers:
	I1127 23:55:08.105810 1525568 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:08.105817 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 319b43a7-5ec8-4ef8-8721-c1f74315c4f2
	I1127 23:55:08.105823 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ef08c5e-2262-449a-8b3b-5a816f8a0487
	I1127 23:55:08.105829 1525568 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:08 GMT
	I1127 23:55:08.105835 1525568 round_trippers.go:580]     Audit-Id: 476abbb0-de3c-4ae3-b932-42c17af3b231
	I1127 23:55:08.105841 1525568 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:08.105990 1525568 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-784312","uid":"9179679b-49e7-4573-ae82-bca6f8470046","resourceVersion":"307","creationTimestamp":"2023-11-27T23:54:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-784312","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-784312","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_54_23_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-27T23:54:19Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1127 23:55:08.603095 1525568 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-784312
	I1127 23:55:08.603116 1525568 round_trippers.go:469] Request Headers:
	I1127 23:55:08.603127 1525568 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:08.603134 1525568 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1127 23:55:08.612049 1525568 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1127 23:55:08.612077 1525568 round_trippers.go:577] Response Headers:
	I1127 23:55:08.612086 1525568 round_trippers.go:580]     Audit-Id: 64e46aad-7ce9-4b5b-a48d-fc9f68f878b8
	I1127 23:55:08.612093 1525568 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:08.612100 1525568 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:08.612106 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 319b43a7-5ec8-4ef8-8721-c1f74315c4f2
	I1127 23:55:08.612112 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ef08c5e-2262-449a-8b3b-5a816f8a0487
	I1127 23:55:08.612119 1525568 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:08 GMT
	I1127 23:55:08.616108 1525568 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-784312","uid":"9179679b-49e7-4573-ae82-bca6f8470046","resourceVersion":"402","creationTimestamp":"2023-11-27T23:54:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-784312","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-784312","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_54_23_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-27T23:54:19Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1127 23:55:08.616511 1525568 node_ready.go:49] node "multinode-784312" has status "Ready":"True"
	I1127 23:55:08.616567 1525568 node_ready.go:38] duration metric: took 31.539056879s waiting for node "multinode-784312" to be "Ready" ...
	I1127 23:55:08.616577 1525568 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
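(The preceding node_ready lines record a simple poll: re-fetch the Node roughly every 500ms until its NodeReady condition reports True, here after 31.5s. A hedged sketch of that pattern with client-go; waitNodeReady and the package name are illustrative, not minikube's exact helpers.)

	// Sketch: poll a Node until the NodeReady condition is True.
	package poll

	import (
		"context"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string) error {
		for {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return err
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil // logged above as has status "Ready":"True"
				}
			}
			select {
			case <-ctx.Done():
				return ctx.Err() // overall wait budget, e.g. the 6m0s above
			case <-time.After(500 * time.Millisecond):
			}
		}
	}
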
	I1127 23:55:08.616650 1525568 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1127 23:55:08.616661 1525568 round_trippers.go:469] Request Headers:
	I1127 23:55:08.616669 1525568 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:08.616677 1525568 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1127 23:55:08.626410 1525568 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I1127 23:55:08.626438 1525568 round_trippers.go:577] Response Headers:
	I1127 23:55:08.626448 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ef08c5e-2262-449a-8b3b-5a816f8a0487
	I1127 23:55:08.626456 1525568 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:08 GMT
	I1127 23:55:08.626467 1525568 round_trippers.go:580]     Audit-Id: b93956c9-fe8f-46b1-8d5a-ea01a9edb70b
	I1127 23:55:08.626473 1525568 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:08.626480 1525568 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:08.626486 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 319b43a7-5ec8-4ef8-8721-c1f74315c4f2
	I1127 23:55:08.628150 1525568 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"410"},"items":[{"metadata":{"name":"coredns-5dd5756b68-n6fjh","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"bd970bc6-edbd-4f25-830d-54a301351a7e","resourceVersion":"406","creationTimestamp":"2023-11-27T23:54:36Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"97d0e2f6-2e20-4f2b-8aca-acc015d0a1b5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:54:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"97d0e2f6-2e20-4f2b-8aca-acc015d0a1b5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55534 chars]
	I1127 23:55:08.632519 1525568 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-n6fjh" in "kube-system" namespace to be "Ready" ...
	I1127 23:55:08.632616 1525568 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-n6fjh
	I1127 23:55:08.632635 1525568 round_trippers.go:469] Request Headers:
	I1127 23:55:08.632648 1525568 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:08.632658 1525568 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1127 23:55:08.636035 1525568 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1127 23:55:08.636062 1525568 round_trippers.go:577] Response Headers:
	I1127 23:55:08.636071 1525568 round_trippers.go:580]     Audit-Id: 26907990-a68a-408c-a129-af6d7a35f6a7
	I1127 23:55:08.636078 1525568 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:08.636084 1525568 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:08.636091 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 319b43a7-5ec8-4ef8-8721-c1f74315c4f2
	I1127 23:55:08.636098 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ef08c5e-2262-449a-8b3b-5a816f8a0487
	I1127 23:55:08.636110 1525568 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:08 GMT
	I1127 23:55:08.636642 1525568 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-n6fjh","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"bd970bc6-edbd-4f25-830d-54a301351a7e","resourceVersion":"406","creationTimestamp":"2023-11-27T23:54:36Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"97d0e2f6-2e20-4f2b-8aca-acc015d0a1b5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:54:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"97d0e2f6-2e20-4f2b-8aca-acc015d0a1b5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I1127 23:55:08.637205 1525568 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-784312
	I1127 23:55:08.637220 1525568 round_trippers.go:469] Request Headers:
	I1127 23:55:08.637228 1525568 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:08.637235 1525568 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1127 23:55:08.639779 1525568 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:55:08.639799 1525568 round_trippers.go:577] Response Headers:
	I1127 23:55:08.639853 1525568 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:08.639867 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 319b43a7-5ec8-4ef8-8721-c1f74315c4f2
	I1127 23:55:08.639874 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ef08c5e-2262-449a-8b3b-5a816f8a0487
	I1127 23:55:08.639880 1525568 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:08 GMT
	I1127 23:55:08.639886 1525568 round_trippers.go:580]     Audit-Id: fc8441fb-89e9-428a-acf3-10e2a5abfed0
	I1127 23:55:08.639896 1525568 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:08.640038 1525568 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-784312","uid":"9179679b-49e7-4573-ae82-bca6f8470046","resourceVersion":"402","creationTimestamp":"2023-11-27T23:54:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-784312","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-784312","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_54_23_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-27T23:54:19Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1127 23:55:08.640500 1525568 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-n6fjh
	I1127 23:55:08.640514 1525568 round_trippers.go:469] Request Headers:
	I1127 23:55:08.640522 1525568 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:08.640529 1525568 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1127 23:55:08.643351 1525568 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:55:08.643384 1525568 round_trippers.go:577] Response Headers:
	I1127 23:55:08.643392 1525568 round_trippers.go:580]     Audit-Id: de3d4a9e-9012-4b80-b1b0-53ff3a944af8
	I1127 23:55:08.643399 1525568 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:08.643409 1525568 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:08.643419 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 319b43a7-5ec8-4ef8-8721-c1f74315c4f2
	I1127 23:55:08.643426 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ef08c5e-2262-449a-8b3b-5a816f8a0487
	I1127 23:55:08.643436 1525568 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:08 GMT
	I1127 23:55:08.645960 1525568 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-n6fjh","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"bd970bc6-edbd-4f25-830d-54a301351a7e","resourceVersion":"406","creationTimestamp":"2023-11-27T23:54:36Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"97d0e2f6-2e20-4f2b-8aca-acc015d0a1b5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:54:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"97d0e2f6-2e20-4f2b-8aca-acc015d0a1b5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I1127 23:55:08.646566 1525568 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-784312
	I1127 23:55:08.646596 1525568 round_trippers.go:469] Request Headers:
	I1127 23:55:08.646608 1525568 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:08.646616 1525568 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1127 23:55:08.649749 1525568 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1127 23:55:08.649785 1525568 round_trippers.go:577] Response Headers:
	I1127 23:55:08.649793 1525568 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:08.649800 1525568 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:08.649806 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 319b43a7-5ec8-4ef8-8721-c1f74315c4f2
	I1127 23:55:08.649813 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ef08c5e-2262-449a-8b3b-5a816f8a0487
	I1127 23:55:08.649821 1525568 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:08 GMT
	I1127 23:55:08.649834 1525568 round_trippers.go:580]     Audit-Id: 6dd8c5cd-6991-4b62-bfa5-e31f0403743b
	I1127 23:55:08.651008 1525568 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-784312","uid":"9179679b-49e7-4573-ae82-bca6f8470046","resourceVersion":"402","creationTimestamp":"2023-11-27T23:54:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-784312","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-784312","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_54_23_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-27T23:54:19Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1127 23:55:09.152647 1525568 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-n6fjh
	I1127 23:55:09.152681 1525568 round_trippers.go:469] Request Headers:
	I1127 23:55:09.152692 1525568 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:09.152700 1525568 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1127 23:55:09.155526 1525568 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:55:09.155553 1525568 round_trippers.go:577] Response Headers:
	I1127 23:55:09.155569 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 319b43a7-5ec8-4ef8-8721-c1f74315c4f2
	I1127 23:55:09.155577 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ef08c5e-2262-449a-8b3b-5a816f8a0487
	I1127 23:55:09.155583 1525568 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:09 GMT
	I1127 23:55:09.155590 1525568 round_trippers.go:580]     Audit-Id: e8de53fe-0c3a-45f2-b5b4-1d955ade623c
	I1127 23:55:09.155596 1525568 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:09.155602 1525568 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:09.155699 1525568 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-n6fjh","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"bd970bc6-edbd-4f25-830d-54a301351a7e","resourceVersion":"406","creationTimestamp":"2023-11-27T23:54:36Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"97d0e2f6-2e20-4f2b-8aca-acc015d0a1b5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:54:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"97d0e2f6-2e20-4f2b-8aca-acc015d0a1b5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I1127 23:55:09.156218 1525568 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-784312
	I1127 23:55:09.156234 1525568 round_trippers.go:469] Request Headers:
	I1127 23:55:09.156243 1525568 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:09.156250 1525568 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1127 23:55:09.158634 1525568 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:55:09.158702 1525568 round_trippers.go:577] Response Headers:
	I1127 23:55:09.158725 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ef08c5e-2262-449a-8b3b-5a816f8a0487
	I1127 23:55:09.158743 1525568 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:09 GMT
	I1127 23:55:09.158756 1525568 round_trippers.go:580]     Audit-Id: fa882175-0ed8-4fb5-9920-1d9633f7a176
	I1127 23:55:09.158778 1525568 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:09.158787 1525568 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:09.158793 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 319b43a7-5ec8-4ef8-8721-c1f74315c4f2
	I1127 23:55:09.158928 1525568 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-784312","uid":"9179679b-49e7-4573-ae82-bca6f8470046","resourceVersion":"402","creationTimestamp":"2023-11-27T23:54:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-784312","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-784312","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_54_23_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-27T23:54:19Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1127 23:55:09.652410 1525568 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-n6fjh
	I1127 23:55:09.652437 1525568 round_trippers.go:469] Request Headers:
	I1127 23:55:09.652446 1525568 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:09.652454 1525568 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1127 23:55:09.654931 1525568 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:55:09.655037 1525568 round_trippers.go:577] Response Headers:
	I1127 23:55:09.655075 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 319b43a7-5ec8-4ef8-8721-c1f74315c4f2
	I1127 23:55:09.655102 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ef08c5e-2262-449a-8b3b-5a816f8a0487
	I1127 23:55:09.655120 1525568 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:09 GMT
	I1127 23:55:09.655142 1525568 round_trippers.go:580]     Audit-Id: 7e2b4fcf-aa8d-4f54-91b1-8d27b2badcbe
	I1127 23:55:09.655155 1525568 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:09.655163 1525568 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:09.655279 1525568 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-n6fjh","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"bd970bc6-edbd-4f25-830d-54a301351a7e","resourceVersion":"418","creationTimestamp":"2023-11-27T23:54:36Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"97d0e2f6-2e20-4f2b-8aca-acc015d0a1b5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:54:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"97d0e2f6-2e20-4f2b-8aca-acc015d0a1b5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6263 chars]
	I1127 23:55:09.655811 1525568 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-784312
	I1127 23:55:09.655828 1525568 round_trippers.go:469] Request Headers:
	I1127 23:55:09.655837 1525568 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:09.655845 1525568 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1127 23:55:09.657942 1525568 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:55:09.657963 1525568 round_trippers.go:577] Response Headers:
	I1127 23:55:09.657971 1525568 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:09.657978 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 319b43a7-5ec8-4ef8-8721-c1f74315c4f2
	I1127 23:55:09.657984 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ef08c5e-2262-449a-8b3b-5a816f8a0487
	I1127 23:55:09.657991 1525568 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:09 GMT
	I1127 23:55:09.657997 1525568 round_trippers.go:580]     Audit-Id: 94b96d1d-6888-4b03-bac7-53bb2736ad3f
	I1127 23:55:09.658006 1525568 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:09.658363 1525568 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-784312","uid":"9179679b-49e7-4573-ae82-bca6f8470046","resourceVersion":"402","creationTimestamp":"2023-11-27T23:54:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-784312","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-784312","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_54_23_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-27T23:54:19Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1127 23:55:09.658864 1525568 pod_ready.go:92] pod "coredns-5dd5756b68-n6fjh" in "kube-system" namespace has status "Ready":"True"
	I1127 23:55:09.658892 1525568 pod_ready.go:81] duration metric: took 1.02634313s waiting for pod "coredns-5dd5756b68-n6fjh" in "kube-system" namespace to be "Ready" ...
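(Each pod_ready verdict above reduces to inspecting one entry in pod.Status.Conditions: the PodReady condition must be True. A minimal sketch of that check; podIsReady is an illustrative helper name.)

	// Sketch: a pod counts as "Ready" when its PodReady condition is True.
	package poll

	import corev1 "k8s.io/api/core/v1"

	func podIsReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}
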
	I1127 23:55:09.658903 1525568 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-784312" in "kube-system" namespace to be "Ready" ...
	I1127 23:55:09.658967 1525568 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-784312
	I1127 23:55:09.658976 1525568 round_trippers.go:469] Request Headers:
	I1127 23:55:09.658985 1525568 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1127 23:55:09.658992 1525568 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:09.661238 1525568 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:55:09.661262 1525568 round_trippers.go:577] Response Headers:
	I1127 23:55:09.661270 1525568 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:09.661278 1525568 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:09.661291 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 319b43a7-5ec8-4ef8-8721-c1f74315c4f2
	I1127 23:55:09.661298 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ef08c5e-2262-449a-8b3b-5a816f8a0487
	I1127 23:55:09.661308 1525568 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:09 GMT
	I1127 23:55:09.661315 1525568 round_trippers.go:580]     Audit-Id: 6b04542c-d94e-44cb-b5ae-95b274408b65
	I1127 23:55:09.661437 1525568 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-784312","namespace":"kube-system","uid":"8ccfe057-0978-4cae-8f60-f369839909b8","resourceVersion":"389","creationTimestamp":"2023-11-27T23:54:22Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"1ddfc2ed4e88470f665ac4b583b77f27","kubernetes.io/config.mirror":"1ddfc2ed4e88470f665ac4b583b77f27","kubernetes.io/config.seen":"2023-11-27T23:54:22.333071846Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-784312","uid":"9179679b-49e7-4573-ae82-bca6f8470046","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:54:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 5833 chars]
	I1127 23:55:09.661927 1525568 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-784312
	I1127 23:55:09.661944 1525568 round_trippers.go:469] Request Headers:
	I1127 23:55:09.661952 1525568 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:09.661959 1525568 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1127 23:55:09.664148 1525568 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:55:09.664167 1525568 round_trippers.go:577] Response Headers:
	I1127 23:55:09.664175 1525568 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:09.664182 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 319b43a7-5ec8-4ef8-8721-c1f74315c4f2
	I1127 23:55:09.664188 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ef08c5e-2262-449a-8b3b-5a816f8a0487
	I1127 23:55:09.664195 1525568 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:09 GMT
	I1127 23:55:09.664208 1525568 round_trippers.go:580]     Audit-Id: 6534701b-1a1b-4a1d-bc35-48f191ebf717
	I1127 23:55:09.664216 1525568 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:09.664451 1525568 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-784312","uid":"9179679b-49e7-4573-ae82-bca6f8470046","resourceVersion":"402","creationTimestamp":"2023-11-27T23:54:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-784312","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-784312","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_54_23_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-27T23:54:19Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1127 23:55:09.664827 1525568 pod_ready.go:92] pod "etcd-multinode-784312" in "kube-system" namespace has status "Ready":"True"
	I1127 23:55:09.664842 1525568 pod_ready.go:81] duration metric: took 5.92878ms waiting for pod "etcd-multinode-784312" in "kube-system" namespace to be "Ready" ...
	I1127 23:55:09.664856 1525568 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-784312" in "kube-system" namespace to be "Ready" ...
	I1127 23:55:09.664916 1525568 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-784312
	I1127 23:55:09.664923 1525568 round_trippers.go:469] Request Headers:
	I1127 23:55:09.664931 1525568 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:09.664938 1525568 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1127 23:55:09.667155 1525568 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:55:09.667214 1525568 round_trippers.go:577] Response Headers:
	I1127 23:55:09.667234 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ef08c5e-2262-449a-8b3b-5a816f8a0487
	I1127 23:55:09.667256 1525568 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:09 GMT
	I1127 23:55:09.667283 1525568 round_trippers.go:580]     Audit-Id: 5be884da-177e-4c38-9a59-b137029a4415
	I1127 23:55:09.667331 1525568 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:09.667346 1525568 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:09.667353 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 319b43a7-5ec8-4ef8-8721-c1f74315c4f2
	I1127 23:55:09.667483 1525568 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-784312","namespace":"kube-system","uid":"0782da70-b0b0-407b-a075-9c1ae5915c7f","resourceVersion":"390","creationTimestamp":"2023-11-27T23:54:20Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"e92132bda09962ee19f51deeb131df5e","kubernetes.io/config.mirror":"e92132bda09962ee19f51deeb131df5e","kubernetes.io/config.seen":"2023-11-27T23:54:14.284091555Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-784312","uid":"9179679b-49e7-4573-ae82-bca6f8470046","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:54:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8219 chars]
	I1127 23:55:09.668018 1525568 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-784312
	I1127 23:55:09.668034 1525568 round_trippers.go:469] Request Headers:
	I1127 23:55:09.668042 1525568 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:09.668049 1525568 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1127 23:55:09.670150 1525568 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:55:09.670202 1525568 round_trippers.go:577] Response Headers:
	I1127 23:55:09.670224 1525568 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:09.670245 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 319b43a7-5ec8-4ef8-8721-c1f74315c4f2
	I1127 23:55:09.670278 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ef08c5e-2262-449a-8b3b-5a816f8a0487
	I1127 23:55:09.670295 1525568 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:09 GMT
	I1127 23:55:09.670308 1525568 round_trippers.go:580]     Audit-Id: 3a5fd283-808c-4333-a712-d51d7c81a36c
	I1127 23:55:09.670314 1525568 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:09.670415 1525568 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-784312","uid":"9179679b-49e7-4573-ae82-bca6f8470046","resourceVersion":"402","creationTimestamp":"2023-11-27T23:54:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-784312","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-784312","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_54_23_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-27T23:54:19Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1127 23:55:09.670782 1525568 pod_ready.go:92] pod "kube-apiserver-multinode-784312" in "kube-system" namespace has status "Ready":"True"
	I1127 23:55:09.670800 1525568 pod_ready.go:81] duration metric: took 5.933629ms waiting for pod "kube-apiserver-multinode-784312" in "kube-system" namespace to be "Ready" ...
	I1127 23:55:09.670810 1525568 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-784312" in "kube-system" namespace to be "Ready" ...
	I1127 23:55:09.670875 1525568 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-784312
	I1127 23:55:09.670885 1525568 round_trippers.go:469] Request Headers:
	I1127 23:55:09.670893 1525568 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:09.670899 1525568 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1127 23:55:09.673189 1525568 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:55:09.673272 1525568 round_trippers.go:577] Response Headers:
	I1127 23:55:09.673293 1525568 round_trippers.go:580]     Audit-Id: 88e07cee-add3-4a1c-ab2f-aa4993441abb
	I1127 23:55:09.673324 1525568 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:09.673345 1525568 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:09.673362 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 319b43a7-5ec8-4ef8-8721-c1f74315c4f2
	I1127 23:55:09.673383 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ef08c5e-2262-449a-8b3b-5a816f8a0487
	I1127 23:55:09.673409 1525568 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:09 GMT
	I1127 23:55:09.673539 1525568 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-784312","namespace":"kube-system","uid":"50264ad1-dc74-4cf1-86e4-25bc27ed82ec","resourceVersion":"391","creationTimestamp":"2023-11-27T23:54:22Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"7b0782ad15902781f6c2b81516f0f59a","kubernetes.io/config.mirror":"7b0782ad15902781f6c2b81516f0f59a","kubernetes.io/config.seen":"2023-11-27T23:54:22.333077746Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-784312","uid":"9179679b-49e7-4573-ae82-bca6f8470046","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:54:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7794 chars]
	I1127 23:55:09.803346 1525568 request.go:629] Waited for 129.278067ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-784312
	I1127 23:55:09.803410 1525568 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-784312
	I1127 23:55:09.803415 1525568 round_trippers.go:469] Request Headers:
	I1127 23:55:09.803424 1525568 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:09.803431 1525568 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1127 23:55:09.805914 1525568 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:55:09.806025 1525568 round_trippers.go:577] Response Headers:
	I1127 23:55:09.806060 1525568 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:09.806099 1525568 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:09.806111 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 319b43a7-5ec8-4ef8-8721-c1f74315c4f2
	I1127 23:55:09.806118 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ef08c5e-2262-449a-8b3b-5a816f8a0487
	I1127 23:55:09.806128 1525568 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:09 GMT
	I1127 23:55:09.806144 1525568 round_trippers.go:580]     Audit-Id: 336567ea-89c5-428f-a863-af83900ac5d0
	I1127 23:55:09.806269 1525568 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-784312","uid":"9179679b-49e7-4573-ae82-bca6f8470046","resourceVersion":"402","creationTimestamp":"2023-11-27T23:54:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-784312","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-784312","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_54_23_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-27T23:54:19Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1127 23:55:09.806698 1525568 pod_ready.go:92] pod "kube-controller-manager-multinode-784312" in "kube-system" namespace has status "Ready":"True"
	I1127 23:55:09.806714 1525568 pod_ready.go:81] duration metric: took 135.888163ms waiting for pod "kube-controller-manager-multinode-784312" in "kube-system" namespace to be "Ready" ...
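(The request.go:629 "Waited for ... due to client-side throttling" entries in this stretch come from client-go's local token-bucket rate limiter, not from server-side API Priority and Fairness — the log says as much. A sketch of where that limiter lives, assuming a standard kubeconfig-based client; QPS=5/Burst=10 are client-go's historical defaults, shown only as illustrative values, and newThrottledClient is a hypothetical name.)

	// Sketch: the client-side limiter is configured via QPS/Burst on
	// rest.Config; requests beyond the burst are delayed locally,
	// producing the "Waited for ..." log lines above.
	package poll

	import (
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func newThrottledClient(kubeconfig string) (*kubernetes.Clientset, error) {
		cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
		if err != nil {
			return nil, err
		}
		cfg.QPS = 5    // sustained requests per second before local queuing
		cfg.Burst = 10 // short bursts allowed above QPS
		return kubernetes.NewForConfig(cfg)
	}
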
	I1127 23:55:09.806726 1525568 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-7vspj" in "kube-system" namespace to be "Ready" ...
	I1127 23:55:10.003305 1525568 request.go:629] Waited for 196.45537ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7vspj
	I1127 23:55:10.003437 1525568 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7vspj
	I1127 23:55:10.003473 1525568 round_trippers.go:469] Request Headers:
	I1127 23:55:10.003500 1525568 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:10.003518 1525568 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1127 23:55:10.009684 1525568 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1127 23:55:10.009768 1525568 round_trippers.go:577] Response Headers:
	I1127 23:55:10.009795 1525568 round_trippers.go:580]     Audit-Id: 51256e88-5eac-422a-ad58-1b3992481cb3
	I1127 23:55:10.009813 1525568 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:10.009846 1525568 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:10.009905 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 319b43a7-5ec8-4ef8-8721-c1f74315c4f2
	I1127 23:55:10.009933 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ef08c5e-2262-449a-8b3b-5a816f8a0487
	I1127 23:55:10.009951 1525568 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:10 GMT
	I1127 23:55:10.010161 1525568 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-7vspj","generateName":"kube-proxy-","namespace":"kube-system","uid":"eeecedf5-ddd9-4647-b567-36b194cb229b","resourceVersion":"385","creationTimestamp":"2023-11-27T23:54:36Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"a6ed3fcf-f10c-4b5e-a68c-dc005d2513e1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:54:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a6ed3fcf-f10c-4b5e-a68c-dc005d2513e1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5510 chars]
	I1127 23:55:10.204215 1525568 request.go:629] Waited for 193.460995ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-784312
	I1127 23:55:10.204293 1525568 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-784312
	I1127 23:55:10.204302 1525568 round_trippers.go:469] Request Headers:
	I1127 23:55:10.204314 1525568 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:10.204323 1525568 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1127 23:55:10.206893 1525568 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:55:10.206926 1525568 round_trippers.go:577] Response Headers:
	I1127 23:55:10.206935 1525568 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:10.206942 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 319b43a7-5ec8-4ef8-8721-c1f74315c4f2
	I1127 23:55:10.206949 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ef08c5e-2262-449a-8b3b-5a816f8a0487
	I1127 23:55:10.206955 1525568 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:10 GMT
	I1127 23:55:10.206961 1525568 round_trippers.go:580]     Audit-Id: 4dd878b0-92a3-483b-bcc6-f90142ff19aa
	I1127 23:55:10.206968 1525568 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:10.207152 1525568 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-784312","uid":"9179679b-49e7-4573-ae82-bca6f8470046","resourceVersion":"402","creationTimestamp":"2023-11-27T23:54:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-784312","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-784312","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_54_23_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:54:19Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1127 23:55:10.207559 1525568 pod_ready.go:92] pod "kube-proxy-7vspj" in "kube-system" namespace has status "Ready":"True"
	I1127 23:55:10.207578 1525568 pod_ready.go:81] duration metric: took 400.84298ms waiting for pod "kube-proxy-7vspj" in "kube-system" namespace to be "Ready" ...
	I1127 23:55:10.207589 1525568 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-784312" in "kube-system" namespace to be "Ready" ...
	I1127 23:55:10.404017 1525568 request.go:629] Waited for 196.338514ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-784312
	I1127 23:55:10.404085 1525568 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-784312
	I1127 23:55:10.404095 1525568 round_trippers.go:469] Request Headers:
	I1127 23:55:10.404110 1525568 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:10.404117 1525568 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1127 23:55:10.406879 1525568 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:55:10.406955 1525568 round_trippers.go:577] Response Headers:
	I1127 23:55:10.406988 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 319b43a7-5ec8-4ef8-8721-c1f74315c4f2
	I1127 23:55:10.407003 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ef08c5e-2262-449a-8b3b-5a816f8a0487
	I1127 23:55:10.407010 1525568 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:10 GMT
	I1127 23:55:10.407016 1525568 round_trippers.go:580]     Audit-Id: d20f621e-51b2-4ecb-833d-f2dfa5a09d8c
	I1127 23:55:10.407025 1525568 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:10.407041 1525568 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:10.407184 1525568 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-784312","namespace":"kube-system","uid":"540bbd67-2910-425c-999b-69f4ec74bc2c","resourceVersion":"392","creationTimestamp":"2023-11-27T23:54:22Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"ab5d51a61ac97955ebf99588ca9d0290","kubernetes.io/config.mirror":"ab5d51a61ac97955ebf99588ca9d0290","kubernetes.io/config.seen":"2023-11-27T23:54:22.333078697Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-784312","uid":"9179679b-49e7-4573-ae82-bca6f8470046","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:54:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4676 chars]
	I1127 23:55:10.603990 1525568 request.go:629] Waited for 196.354227ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-784312
	I1127 23:55:10.604060 1525568 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-784312
	I1127 23:55:10.604070 1525568 round_trippers.go:469] Request Headers:
	I1127 23:55:10.604079 1525568 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1127 23:55:10.604086 1525568 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:10.606622 1525568 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:55:10.606646 1525568 round_trippers.go:577] Response Headers:
	I1127 23:55:10.606654 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ef08c5e-2262-449a-8b3b-5a816f8a0487
	I1127 23:55:10.606661 1525568 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:10 GMT
	I1127 23:55:10.606695 1525568 round_trippers.go:580]     Audit-Id: 26075708-8ff8-428e-8488-80b28c22ecf0
	I1127 23:55:10.606707 1525568 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:10.606716 1525568 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:10.606727 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 319b43a7-5ec8-4ef8-8721-c1f74315c4f2
	I1127 23:55:10.606844 1525568 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-784312","uid":"9179679b-49e7-4573-ae82-bca6f8470046","resourceVersion":"402","creationTimestamp":"2023-11-27T23:54:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-784312","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-784312","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_54_23_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:54:19Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1127 23:55:10.607247 1525568 pod_ready.go:92] pod "kube-scheduler-multinode-784312" in "kube-system" namespace has status "Ready":"True"
	I1127 23:55:10.607267 1525568 pod_ready.go:81] duration metric: took 399.667739ms waiting for pod "kube-scheduler-multinode-784312" in "kube-system" namespace to be "Ready" ...
	I1127 23:55:10.607281 1525568 pod_ready.go:38] duration metric: took 1.990688781s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
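
	The readiness wait that just completed boils down to polling each pod's Ready condition against the deadline. A minimal client-go sketch of that check, not minikube's own pod_ready.go implementation: the kubeconfig path and 400ms interval are assumptions, while the pod name, namespace, and 6m budget come from the log.

	    package main

	    import (
	        "context"
	        "fmt"
	        "time"

	        corev1 "k8s.io/api/core/v1"
	        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	        "k8s.io/client-go/kubernetes"
	        "k8s.io/client-go/tools/clientcmd"
	    )

	    func main() {
	        // Hypothetical kubeconfig path; the harness builds its REST config in-process.
	        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
	        if err != nil {
	            panic(err)
	        }
	        cs, err := kubernetes.NewForConfig(cfg)
	        if err != nil {
	            panic(err)
	        }
	        deadline := time.Now().Add(6 * time.Minute) // "waiting up to 6m0s" in the log
	        for time.Now().Before(deadline) {
	            pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "kube-proxy-7vspj", metav1.GetOptions{})
	            if err == nil {
	                for _, c := range pod.Status.Conditions {
	                    if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
	                        fmt.Println(`pod has status "Ready":"True"`)
	                        return
	                    }
	                }
	            }
	            // Assumed interval; the real requests above are spaced by the client-side throttler.
	            time.Sleep(400 * time.Millisecond)
	        }
	        fmt.Println("timed out waiting for pod to be Ready")
	    }
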
	I1127 23:55:10.607300 1525568 api_server.go:52] waiting for apiserver process to appear ...
	I1127 23:55:10.607371 1525568 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1127 23:55:10.619369 1525568 command_runner.go:130] > 1277
	I1127 23:55:10.620815 1525568 api_server.go:72] duration metric: took 33.708234905s to wait for apiserver process to appear ...
	I1127 23:55:10.620839 1525568 api_server.go:88] waiting for apiserver healthz status ...
	I1127 23:55:10.620856 1525568 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1127 23:55:10.629628 1525568 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
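
	The healthz probe above reduces to a plain HTTPS GET against the apiserver. A self-contained sketch follows; the 5s timeout is an assumption, and InsecureSkipVerify is only a shortcut to keep the sketch standalone (the harness authenticates with its client certificates).

	    package main

	    import (
	        "crypto/tls"
	        "fmt"
	        "io"
	        "net/http"
	        "time"
	    )

	    func main() {
	        client := &http.Client{
	            Timeout: 5 * time.Second, // assumed
	            Transport: &http.Transport{
	                // Sketch-only shortcut; do not skip verification outside a test sandbox.
	                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	            },
	        }
	        resp, err := client.Get("https://192.168.58.2:8443/healthz")
	        if err != nil {
	            panic(err)
	        }
	        defer resp.Body.Close()
	        body, _ := io.ReadAll(resp.Body)
	        fmt.Printf("%s: %s\n", resp.Status, body) // the log shows "returned 200: ok"
	    }
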
	I1127 23:55:10.629702 1525568 round_trippers.go:463] GET https://192.168.58.2:8443/version
	I1127 23:55:10.629712 1525568 round_trippers.go:469] Request Headers:
	I1127 23:55:10.629721 1525568 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:10.629728 1525568 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1127 23:55:10.630945 1525568 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1127 23:55:10.630963 1525568 round_trippers.go:577] Response Headers:
	I1127 23:55:10.630971 1525568 round_trippers.go:580]     Content-Length: 264
	I1127 23:55:10.630977 1525568 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:10 GMT
	I1127 23:55:10.630992 1525568 round_trippers.go:580]     Audit-Id: 95db44dd-65db-495b-bf2f-973264a27569
	I1127 23:55:10.631014 1525568 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:10.631025 1525568 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:10.631031 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 319b43a7-5ec8-4ef8-8721-c1f74315c4f2
	I1127 23:55:10.631039 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ef08c5e-2262-449a-8b3b-5a816f8a0487
	I1127 23:55:10.631067 1525568 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.4",
	  "gitCommit": "bae2c62678db2b5053817bc97181fcc2e8388103",
	  "gitTreeState": "clean",
	  "buildDate": "2023-11-15T16:48:54Z",
	  "goVersion": "go1.20.11",
	  "compiler": "gc",
	  "platform": "linux/arm64"
	}
	I1127 23:55:10.631164 1525568 api_server.go:141] control plane version: v1.28.4
	I1127 23:55:10.631184 1525568 api_server.go:131] duration metric: took 10.33851ms to wait for apiserver health ...
	I1127 23:55:10.631193 1525568 system_pods.go:43] waiting for kube-system pods to appear ...
	I1127 23:55:10.803521 1525568 request.go:629] Waited for 172.260737ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1127 23:55:10.803666 1525568 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1127 23:55:10.803680 1525568 round_trippers.go:469] Request Headers:
	I1127 23:55:10.803689 1525568 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:10.803696 1525568 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1127 23:55:10.807082 1525568 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1127 23:55:10.807105 1525568 round_trippers.go:577] Response Headers:
	I1127 23:55:10.807114 1525568 round_trippers.go:580]     Audit-Id: 7559e6f3-8658-428d-a16b-a7fb078b3277
	I1127 23:55:10.807121 1525568 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:10.807127 1525568 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:10.807133 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 319b43a7-5ec8-4ef8-8721-c1f74315c4f2
	I1127 23:55:10.807140 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ef08c5e-2262-449a-8b3b-5a816f8a0487
	I1127 23:55:10.807151 1525568 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:10 GMT
	I1127 23:55:10.808028 1525568 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"424"},"items":[{"metadata":{"name":"coredns-5dd5756b68-n6fjh","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"bd970bc6-edbd-4f25-830d-54a301351a7e","resourceVersion":"418","creationTimestamp":"2023-11-27T23:54:36Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"97d0e2f6-2e20-4f2b-8aca-acc015d0a1b5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:54:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"97d0e2f6-2e20-4f2b-8aca-acc015d0a1b5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55612 chars]
	I1127 23:55:10.810743 1525568 system_pods.go:59] 8 kube-system pods found
	I1127 23:55:10.810781 1525568 system_pods.go:61] "coredns-5dd5756b68-n6fjh" [bd970bc6-edbd-4f25-830d-54a301351a7e] Running
	I1127 23:55:10.810788 1525568 system_pods.go:61] "etcd-multinode-784312" [8ccfe057-0978-4cae-8f60-f369839909b8] Running
	I1127 23:55:10.810794 1525568 system_pods.go:61] "kindnet-hwrdz" [068cf2a8-3b1a-431c-9cc5-2f290d6755cd] Running
	I1127 23:55:10.810801 1525568 system_pods.go:61] "kube-apiserver-multinode-784312" [0782da70-b0b0-407b-a075-9c1ae5915c7f] Running
	I1127 23:55:10.810814 1525568 system_pods.go:61] "kube-controller-manager-multinode-784312" [50264ad1-dc74-4cf1-86e4-25bc27ed82ec] Running
	I1127 23:55:10.810823 1525568 system_pods.go:61] "kube-proxy-7vspj" [eeecedf5-ddd9-4647-b567-36b194cb229b] Running
	I1127 23:55:10.810828 1525568 system_pods.go:61] "kube-scheduler-multinode-784312" [540bbd67-2910-425c-999b-69f4ec74bc2c] Running
	I1127 23:55:10.810839 1525568 system_pods.go:61] "storage-provisioner" [712aa9f0-276e-458d-9783-9a05ee6dfb39] Running
	I1127 23:55:10.810845 1525568 system_pods.go:74] duration metric: took 179.644052ms to wait for pod list to return data ...
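
	The pod inventory above is a single List call over the kube-system namespace. A client-go sketch of the same enumeration (kubeconfig path assumed, as before):

	    package main

	    import (
	        "context"
	        "fmt"

	        corev1 "k8s.io/api/core/v1"
	        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	        "k8s.io/client-go/kubernetes"
	        "k8s.io/client-go/tools/clientcmd"
	    )

	    func main() {
	        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // assumed path
	        if err != nil {
	            panic(err)
	        }
	        cs, err := kubernetes.NewForConfig(cfg)
	        if err != nil {
	            panic(err)
	        }
	        pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	        if err != nil {
	            panic(err)
	        }
	        running := 0
	        for _, p := range pods.Items {
	            if p.Status.Phase == corev1.PodRunning {
	                running++
	            }
	            fmt.Printf("%q %s\n", p.Name, p.Status.Phase) // all 8 report Running in the log
	        }
	        fmt.Printf("%d/%d kube-system pods running\n", running, len(pods.Items))
	    }
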
	I1127 23:55:10.810856 1525568 default_sa.go:34] waiting for default service account to be created ...
	I1127 23:55:11.003576 1525568 request.go:629] Waited for 192.628896ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/default/serviceaccounts
	I1127 23:55:11.003658 1525568 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/default/serviceaccounts
	I1127 23:55:11.003665 1525568 round_trippers.go:469] Request Headers:
	I1127 23:55:11.003674 1525568 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:11.003682 1525568 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1127 23:55:11.012920 1525568 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I1127 23:55:11.012947 1525568 round_trippers.go:577] Response Headers:
	I1127 23:55:11.012956 1525568 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:11.012962 1525568 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:11.012968 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 319b43a7-5ec8-4ef8-8721-c1f74315c4f2
	I1127 23:55:11.012974 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ef08c5e-2262-449a-8b3b-5a816f8a0487
	I1127 23:55:11.012981 1525568 round_trippers.go:580]     Content-Length: 261
	I1127 23:55:11.012987 1525568 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:11 GMT
	I1127 23:55:11.012993 1525568 round_trippers.go:580]     Audit-Id: 923d9fb7-486d-4bd4-b6bf-d8dba98c5027
	I1127 23:55:11.013013 1525568 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"425"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"7a1b7286-fe84-44e1-8102-34f7fd770064","resourceVersion":"320","creationTimestamp":"2023-11-27T23:54:36Z"}}]}
	I1127 23:55:11.013213 1525568 default_sa.go:45] found service account: "default"
	I1127 23:55:11.013235 1525568 default_sa.go:55] duration metric: took 202.368487ms for default service account to be created ...
	I1127 23:55:11.013244 1525568 system_pods.go:116] waiting for k8s-apps to be running ...
	I1127 23:55:11.203702 1525568 request.go:629] Waited for 190.382575ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1127 23:55:11.203803 1525568 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1127 23:55:11.203813 1525568 round_trippers.go:469] Request Headers:
	I1127 23:55:11.203823 1525568 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:11.203838 1525568 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1127 23:55:11.207560 1525568 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1127 23:55:11.207584 1525568 round_trippers.go:577] Response Headers:
	I1127 23:55:11.207593 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ef08c5e-2262-449a-8b3b-5a816f8a0487
	I1127 23:55:11.207600 1525568 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:11 GMT
	I1127 23:55:11.207607 1525568 round_trippers.go:580]     Audit-Id: 7a4cef1e-f7bf-4a62-a9fe-cd42441f6a16
	I1127 23:55:11.207613 1525568 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:11.207619 1525568 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:11.207625 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 319b43a7-5ec8-4ef8-8721-c1f74315c4f2
	I1127 23:55:11.208113 1525568 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"425"},"items":[{"metadata":{"name":"coredns-5dd5756b68-n6fjh","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"bd970bc6-edbd-4f25-830d-54a301351a7e","resourceVersion":"418","creationTimestamp":"2023-11-27T23:54:36Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"97d0e2f6-2e20-4f2b-8aca-acc015d0a1b5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:54:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"97d0e2f6-2e20-4f2b-8aca-acc015d0a1b5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55612 chars]
	I1127 23:55:11.210587 1525568 system_pods.go:86] 8 kube-system pods found
	I1127 23:55:11.210615 1525568 system_pods.go:89] "coredns-5dd5756b68-n6fjh" [bd970bc6-edbd-4f25-830d-54a301351a7e] Running
	I1127 23:55:11.210624 1525568 system_pods.go:89] "etcd-multinode-784312" [8ccfe057-0978-4cae-8f60-f369839909b8] Running
	I1127 23:55:11.210629 1525568 system_pods.go:89] "kindnet-hwrdz" [068cf2a8-3b1a-431c-9cc5-2f290d6755cd] Running
	I1127 23:55:11.210636 1525568 system_pods.go:89] "kube-apiserver-multinode-784312" [0782da70-b0b0-407b-a075-9c1ae5915c7f] Running
	I1127 23:55:11.210642 1525568 system_pods.go:89] "kube-controller-manager-multinode-784312" [50264ad1-dc74-4cf1-86e4-25bc27ed82ec] Running
	I1127 23:55:11.210674 1525568 system_pods.go:89] "kube-proxy-7vspj" [eeecedf5-ddd9-4647-b567-36b194cb229b] Running
	I1127 23:55:11.210685 1525568 system_pods.go:89] "kube-scheduler-multinode-784312" [540bbd67-2910-425c-999b-69f4ec74bc2c] Running
	I1127 23:55:11.210690 1525568 system_pods.go:89] "storage-provisioner" [712aa9f0-276e-458d-9783-9a05ee6dfb39] Running
	I1127 23:55:11.210697 1525568 system_pods.go:126] duration metric: took 197.448206ms to wait for k8s-apps to be running ...
	I1127 23:55:11.210708 1525568 system_svc.go:44] waiting for kubelet service to be running ....
	I1127 23:55:11.210772 1525568 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1127 23:55:11.224703 1525568 system_svc.go:56] duration metric: took 13.986164ms WaitForService to wait for kubelet.
	I1127 23:55:11.224728 1525568 kubeadm.go:581] duration metric: took 34.312152549s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
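
	The kubelet check above is a single remote command whose exit status carries the whole answer; --quiet suppresses output entirely. Run locally instead of over SSH, the same probe looks like this os/exec sketch:

	    package main

	    import (
	        "fmt"
	        "os/exec"
	    )

	    func main() {
	        // Mirrors the command in the log; a nil error means exit status 0, i.e. active.
	        cmd := exec.Command("sudo", "systemctl", "is-active", "--quiet", "service", "kubelet")
	        if err := cmd.Run(); err != nil {
	            fmt.Println("kubelet service is not active:", err)
	            return
	        }
	        fmt.Println("kubelet service is active")
	    }
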
	I1127 23:55:11.224747 1525568 node_conditions.go:102] verifying NodePressure condition ...
	I1127 23:55:11.404152 1525568 request.go:629] Waited for 179.332883ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes
	I1127 23:55:11.404216 1525568 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes
	I1127 23:55:11.404226 1525568 round_trippers.go:469] Request Headers:
	I1127 23:55:11.404237 1525568 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:11.404247 1525568 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1127 23:55:11.406838 1525568 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:55:11.406904 1525568 round_trippers.go:577] Response Headers:
	I1127 23:55:11.406942 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 319b43a7-5ec8-4ef8-8721-c1f74315c4f2
	I1127 23:55:11.406961 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ef08c5e-2262-449a-8b3b-5a816f8a0487
	I1127 23:55:11.406985 1525568 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:11 GMT
	I1127 23:55:11.406994 1525568 round_trippers.go:580]     Audit-Id: 188d20df-baa8-4ad7-a6a9-3f758ebe0847
	I1127 23:55:11.407001 1525568 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:11.407007 1525568 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:11.407146 1525568 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"425"},"items":[{"metadata":{"name":"multinode-784312","uid":"9179679b-49e7-4573-ae82-bca6f8470046","resourceVersion":"402","creationTimestamp":"2023-11-27T23:54:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-784312","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-784312","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_54_23_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 6082 chars]
	I1127 23:55:11.407595 1525568 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1127 23:55:11.407618 1525568 node_conditions.go:123] node cpu capacity is 2
	I1127 23:55:11.407628 1525568 node_conditions.go:105] duration metric: took 182.875956ms to run NodePressure ...
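
	The capacity figures logged above come straight out of the node's status and use Kubernetes quantity notation; they can be decoded with the apimachinery resource package. The values below are copied from the NodeList response in the log:

	    package main

	    import (
	        "fmt"

	        "k8s.io/apimachinery/pkg/api/resource"
	    )

	    func main() {
	        storage := resource.MustParse("203034800Ki") // ephemeral-storage capacity from the log
	        cpu := resource.MustParse("2")               // cpu capacity from the log
	        fmt.Printf("ephemeral storage: %d bytes\n", storage.Value())
	        fmt.Printf("cpu capacity: %d cores\n", cpu.Value())
	    }
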
	I1127 23:55:11.407638 1525568 start.go:228] waiting for startup goroutines ...
	I1127 23:55:11.407644 1525568 start.go:233] waiting for cluster config update ...
	I1127 23:55:11.407654 1525568 start.go:242] writing updated cluster config ...
	I1127 23:55:11.410221 1525568 out.go:177] 
	I1127 23:55:11.412071 1525568 config.go:182] Loaded profile config "multinode-784312": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1127 23:55:11.412170 1525568 profile.go:148] Saving config to /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/multinode-784312/config.json ...
	I1127 23:55:11.414619 1525568 out.go:177] * Starting worker node multinode-784312-m02 in cluster multinode-784312
	I1127 23:55:11.416586 1525568 cache.go:121] Beginning downloading kic base image for docker with crio
	I1127 23:55:11.418672 1525568 out.go:177] * Pulling base image ...
	I1127 23:55:11.420876 1525568 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1127 23:55:11.420891 1525568 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 in local docker daemon
	I1127 23:55:11.420912 1525568 cache.go:56] Caching tarball of preloaded images
	I1127 23:55:11.421015 1525568 preload.go:174] Found /home/jenkins/minikube-integration/17206-1455288/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1127 23:55:11.421026 1525568 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I1127 23:55:11.421127 1525568 profile.go:148] Saving config to /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/multinode-784312/config.json ...
	I1127 23:55:11.438557 1525568 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 in local docker daemon, skipping pull
	I1127 23:55:11.438583 1525568 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 exists in daemon, skipping load
	I1127 23:55:11.438604 1525568 cache.go:194] Successfully downloaded all kic artifacts
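
	The "exists in daemon, skipping pull" decision above can be reproduced with one docker CLI call: `docker image inspect` exits non-zero when the reference is absent locally. A small os/exec sketch:

	    package main

	    import (
	        "fmt"
	        "os/exec"
	    )

	    func main() {
	        // Image reference copied from the log.
	        ref := "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50"
	        if err := exec.Command("docker", "image", "inspect", ref).Run(); err != nil {
	            fmt.Println("not in local daemon, would pull:", err)
	            return
	        }
	        fmt.Println("found in local daemon, skipping pull")
	    }
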
	I1127 23:55:11.438635 1525568 start.go:365] acquiring machines lock for multinode-784312-m02: {Name:mkcd62b6c6888ee8821ec52cf6c184c3ef21e0a4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1127 23:55:11.438755 1525568 start.go:369] acquired machines lock for "multinode-784312-m02" in 94.769µs
	I1127 23:55:11.438785 1525568 start.go:93] Provisioning new machine with config: &{Name:multinode-784312 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-784312 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name:m02 IP: Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1127 23:55:11.438862 1525568 start.go:125] createHost starting for "m02" (driver="docker")
	I1127 23:55:11.442408 1525568 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1127 23:55:11.442541 1525568 start.go:159] libmachine.API.Create for "multinode-784312" (driver="docker")
	I1127 23:55:11.442574 1525568 client.go:168] LocalClient.Create starting
	I1127 23:55:11.442642 1525568 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17206-1455288/.minikube/certs/ca.pem
	I1127 23:55:11.442681 1525568 main.go:141] libmachine: Decoding PEM data...
	I1127 23:55:11.442701 1525568 main.go:141] libmachine: Parsing certificate...
	I1127 23:55:11.442760 1525568 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17206-1455288/.minikube/certs/cert.pem
	I1127 23:55:11.442787 1525568 main.go:141] libmachine: Decoding PEM data...
	I1127 23:55:11.442802 1525568 main.go:141] libmachine: Parsing certificate...
	I1127 23:55:11.443058 1525568 cli_runner.go:164] Run: docker network inspect multinode-784312 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1127 23:55:11.460611 1525568 network_create.go:77] Found existing network {name:multinode-784312 subnet:0x4003460ae0 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 58 1] mtu:1500}
	I1127 23:55:11.460656 1525568 kic.go:121] calculated static IP "192.168.58.3" for the "multinode-784312-m02" container
	I1127 23:55:11.460728 1525568 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1127 23:55:11.478149 1525568 cli_runner.go:164] Run: docker volume create multinode-784312-m02 --label name.minikube.sigs.k8s.io=multinode-784312-m02 --label created_by.minikube.sigs.k8s.io=true
	I1127 23:55:11.496686 1525568 oci.go:103] Successfully created a docker volume multinode-784312-m02
	I1127 23:55:11.496778 1525568 cli_runner.go:164] Run: docker run --rm --name multinode-784312-m02-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-784312-m02 --entrypoint /usr/bin/test -v multinode-784312-m02:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 -d /var/lib
	I1127 23:55:12.117795 1525568 oci.go:107] Successfully prepared a docker volume multinode-784312-m02
	I1127 23:55:12.117843 1525568 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1127 23:55:12.117918 1525568 kic.go:194] Starting extracting preloaded images to volume ...
	I1127 23:55:12.118006 1525568 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17206-1455288/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v multinode-784312-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 -I lz4 -xf /preloaded.tar -C /extractDir
	I1127 23:55:16.540878 1525568 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17206-1455288/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v multinode-784312-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 -I lz4 -xf /preloaded.tar -C /extractDir: (4.422828393s)
	I1127 23:55:16.540911 1525568 kic.go:203] duration metric: took 4.422990 seconds to extract preloaded images to volume
	W1127 23:55:16.541064 1525568 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1127 23:55:16.541179 1525568 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1127 23:55:16.618414 1525568 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-784312-m02 --name multinode-784312-m02 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-784312-m02 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-784312-m02 --network multinode-784312 --ip 192.168.58.3 --volume multinode-784312-m02:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50
	I1127 23:55:16.985709 1525568 cli_runner.go:164] Run: docker container inspect multinode-784312-m02 --format={{.State.Running}}
	I1127 23:55:17.010043 1525568 cli_runner.go:164] Run: docker container inspect multinode-784312-m02 --format={{.State.Status}}
	I1127 23:55:17.038889 1525568 cli_runner.go:164] Run: docker exec multinode-784312-m02 stat /var/lib/dpkg/alternatives/iptables
	I1127 23:55:17.100608 1525568 oci.go:144] the created container "multinode-784312-m02" has a running status.
	I1127 23:55:17.100636 1525568 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17206-1455288/.minikube/machines/multinode-784312-m02/id_rsa...
	I1127 23:55:17.359339 1525568 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-1455288/.minikube/machines/multinode-784312-m02/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1127 23:55:17.359437 1525568 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17206-1455288/.minikube/machines/multinode-784312-m02/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1127 23:55:17.385230 1525568 cli_runner.go:164] Run: docker container inspect multinode-784312-m02 --format={{.State.Status}}
	I1127 23:55:17.411925 1525568 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1127 23:55:17.411990 1525568 kic_runner.go:114] Args: [docker exec --privileged multinode-784312-m02 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1127 23:55:17.491427 1525568 cli_runner.go:164] Run: docker container inspect multinode-784312-m02 --format={{.State.Status}}
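
	The key created above is an ordinary RSA SSH keypair; the 381-byte payload copied into /home/docker/.ssh/authorized_keys is its public half. A sketch of that generation step follows. The 2048-bit size is an assumption, consistent with the payload length logged above:

	    package main

	    import (
	        "crypto/rand"
	        "crypto/rsa"
	        "fmt"

	        "golang.org/x/crypto/ssh"
	    )

	    func main() {
	        priv, err := rsa.GenerateKey(rand.Reader, 2048) // assumed key size
	        if err != nil {
	            panic(err)
	        }
	        pub, err := ssh.NewPublicKey(&priv.PublicKey)
	        if err != nil {
	            panic(err)
	        }
	        // This line is what gets appended to authorized_keys on the node.
	        fmt.Printf("%s", ssh.MarshalAuthorizedKey(pub))
	    }
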
	I1127 23:55:17.521971 1525568 machine.go:88] provisioning docker machine ...
	I1127 23:55:17.522007 1525568 ubuntu.go:169] provisioning hostname "multinode-784312-m02"
	I1127 23:55:17.522080 1525568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-784312-m02
	I1127 23:55:17.553640 1525568 main.go:141] libmachine: Using SSH client type: native
	I1127 23:55:17.554372 1525568 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3be600] 0x3c0d70 <nil>  [] 0s} 127.0.0.1 34149 <nil> <nil>}
	I1127 23:55:17.554396 1525568 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-784312-m02 && echo "multinode-784312-m02" | sudo tee /etc/hostname
	I1127 23:55:17.555830 1525568 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1127 23:55:20.700679 1525568 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-784312-m02
	
	I1127 23:55:20.700764 1525568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-784312-m02
	I1127 23:55:20.719498 1525568 main.go:141] libmachine: Using SSH client type: native
	I1127 23:55:20.719922 1525568 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3be600] 0x3c0d70 <nil>  [] 0s} 127.0.0.1 34149 <nil> <nil>}
	I1127 23:55:20.719946 1525568 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-784312-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-784312-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-784312-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1127 23:55:20.855606 1525568 main.go:141] libmachine: SSH cmd err, output: <nil>: 
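
	Each of the remote commands above runs through a native Go SSH client against the container's published port. A hedged sketch of one such round trip with golang.org/x/crypto/ssh: the address, port, user, key path, and command mirror the log, and host-key checking is skipped only to keep the sketch short.

	    package main

	    import (
	        "fmt"
	        "os"

	        "golang.org/x/crypto/ssh"
	    )

	    func main() {
	        key, err := os.ReadFile("/home/jenkins/minikube-integration/17206-1455288/.minikube/machines/multinode-784312-m02/id_rsa")
	        if err != nil {
	            panic(err)
	        }
	        signer, err := ssh.ParsePrivateKey(key)
	        if err != nil {
	            panic(err)
	        }
	        cfg := &ssh.ClientConfig{
	            User:            "docker",
	            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
	            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // sketch-only shortcut
	        }
	        client, err := ssh.Dial("tcp", "127.0.0.1:34149", cfg) // port from the log
	        if err != nil {
	            panic(err)
	        }
	        defer client.Close()
	        sess, err := client.NewSession()
	        if err != nil {
	            panic(err)
	        }
	        defer sess.Close()
	        out, err := sess.CombinedOutput(`sudo hostname multinode-784312-m02 && echo "multinode-784312-m02" | sudo tee /etc/hostname`)
	        fmt.Printf("%s err=%v\n", out, err)
	    }
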
	I1127 23:55:20.855635 1525568 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17206-1455288/.minikube CaCertPath:/home/jenkins/minikube-integration/17206-1455288/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17206-1455288/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17206-1455288/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17206-1455288/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17206-1455288/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17206-1455288/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17206-1455288/.minikube}
	I1127 23:55:20.855659 1525568 ubuntu.go:177] setting up certificates
	I1127 23:55:20.855668 1525568 provision.go:83] configureAuth start
	I1127 23:55:20.855741 1525568 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-784312-m02
	I1127 23:55:20.875559 1525568 provision.go:138] copyHostCerts
	I1127 23:55:20.875608 1525568 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-1455288/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17206-1455288/.minikube/ca.pem
	I1127 23:55:20.875639 1525568 exec_runner.go:144] found /home/jenkins/minikube-integration/17206-1455288/.minikube/ca.pem, removing ...
	I1127 23:55:20.875651 1525568 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17206-1455288/.minikube/ca.pem
	I1127 23:55:20.875727 1525568 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17206-1455288/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17206-1455288/.minikube/ca.pem (1078 bytes)
	I1127 23:55:20.875815 1525568 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-1455288/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17206-1455288/.minikube/cert.pem
	I1127 23:55:20.875837 1525568 exec_runner.go:144] found /home/jenkins/minikube-integration/17206-1455288/.minikube/cert.pem, removing ...
	I1127 23:55:20.875846 1525568 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17206-1455288/.minikube/cert.pem
	I1127 23:55:20.875874 1525568 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17206-1455288/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17206-1455288/.minikube/cert.pem (1123 bytes)
	I1127 23:55:20.876013 1525568 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-1455288/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17206-1455288/.minikube/key.pem
	I1127 23:55:20.876054 1525568 exec_runner.go:144] found /home/jenkins/minikube-integration/17206-1455288/.minikube/key.pem, removing ...
	I1127 23:55:20.876064 1525568 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17206-1455288/.minikube/key.pem
	I1127 23:55:20.876099 1525568 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17206-1455288/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17206-1455288/.minikube/key.pem (1679 bytes)
	I1127 23:55:20.876156 1525568 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17206-1455288/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17206-1455288/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17206-1455288/.minikube/certs/ca-key.pem org=jenkins.multinode-784312-m02 san=[192.168.58.3 127.0.0.1 localhost 127.0.0.1 minikube multinode-784312-m02]
	I1127 23:55:21.139152 1525568 provision.go:172] copyRemoteCerts
	I1127 23:55:21.139260 1525568 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1127 23:55:21.139336 1525568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-784312-m02
	I1127 23:55:21.158363 1525568 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34149 SSHKeyPath:/home/jenkins/minikube-integration/17206-1455288/.minikube/machines/multinode-784312-m02/id_rsa Username:docker}
	I1127 23:55:21.257205 1525568 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-1455288/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1127 23:55:21.257276 1525568 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-1455288/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1127 23:55:21.286586 1525568 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-1455288/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1127 23:55:21.286648 1525568 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-1455288/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1127 23:55:21.316171 1525568 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-1455288/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1127 23:55:21.316243 1525568 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-1455288/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I1127 23:55:21.347326 1525568 provision.go:86] duration metric: configureAuth took 491.641435ms
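
	The server-cert step inside configureAuth above can be approximated with the standard library. This sketch self-signs for brevity (the real harness signs with its CA key) and reuses the SAN list, organization, and 26280h expiry that appear in the log:

	    package main

	    import (
	        "crypto/rand"
	        "crypto/rsa"
	        "crypto/x509"
	        "crypto/x509/pkix"
	        "encoding/pem"
	        "math/big"
	        "net"
	        "os"
	        "time"
	    )

	    func main() {
	        priv, err := rsa.GenerateKey(rand.Reader, 2048)
	        if err != nil {
	            panic(err)
	        }
	        tmpl := &x509.Certificate{
	            SerialNumber: big.NewInt(1),
	            Subject:      pkix.Name{Organization: []string{"jenkins.multinode-784312-m02"}},
	            NotBefore:    time.Now(),
	            NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
	            DNSNames:     []string{"localhost", "minikube", "multinode-784312-m02"},
	            IPAddresses:  []net.IP{net.ParseIP("192.168.58.3"), net.ParseIP("127.0.0.1")},
	            KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
	            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	        }
	        // Self-signed: template doubles as parent, unlike the CA-signed original.
	        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &priv.PublicKey, priv)
	        if err != nil {
	            panic(err)
	        }
	        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	    }
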
	I1127 23:55:21.347353 1525568 ubuntu.go:193] setting minikube options for container-runtime
	I1127 23:55:21.347558 1525568 config.go:182] Loaded profile config "multinode-784312": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1127 23:55:21.347662 1525568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-784312-m02
	I1127 23:55:21.366208 1525568 main.go:141] libmachine: Using SSH client type: native
	I1127 23:55:21.366616 1525568 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3be600] 0x3c0d70 <nil>  [] 0s} 127.0.0.1 34149 <nil> <nil>}
	I1127 23:55:21.366636 1525568 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1127 23:55:21.624842 1525568 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1127 23:55:21.624871 1525568 machine.go:91] provisioned docker machine in 4.10287684s
	I1127 23:55:21.624881 1525568 client.go:171] LocalClient.Create took 10.182297639s
	I1127 23:55:21.624893 1525568 start.go:167] duration metric: libmachine.API.Create for "multinode-784312" took 10.1823593s
	I1127 23:55:21.624900 1525568 start.go:300] post-start starting for "multinode-784312-m02" (driver="docker")
	I1127 23:55:21.624910 1525568 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1127 23:55:21.624968 1525568 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1127 23:55:21.625018 1525568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-784312-m02
	I1127 23:55:21.643797 1525568 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34149 SSHKeyPath:/home/jenkins/minikube-integration/17206-1455288/.minikube/machines/multinode-784312-m02/id_rsa Username:docker}
	I1127 23:55:21.741136 1525568 ssh_runner.go:195] Run: cat /etc/os-release
	I1127 23:55:21.745697 1525568 command_runner.go:130] > PRETTY_NAME="Ubuntu 22.04.3 LTS"
	I1127 23:55:21.745715 1525568 command_runner.go:130] > NAME="Ubuntu"
	I1127 23:55:21.745723 1525568 command_runner.go:130] > VERSION_ID="22.04"
	I1127 23:55:21.745730 1525568 command_runner.go:130] > VERSION="22.04.3 LTS (Jammy Jellyfish)"
	I1127 23:55:21.745736 1525568 command_runner.go:130] > VERSION_CODENAME=jammy
	I1127 23:55:21.745740 1525568 command_runner.go:130] > ID=ubuntu
	I1127 23:55:21.745764 1525568 command_runner.go:130] > ID_LIKE=debian
	I1127 23:55:21.745775 1525568 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I1127 23:55:21.745781 1525568 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I1127 23:55:21.745791 1525568 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I1127 23:55:21.745803 1525568 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I1127 23:55:21.745814 1525568 command_runner.go:130] > UBUNTU_CODENAME=jammy
	I1127 23:55:21.745911 1525568 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1127 23:55:21.745955 1525568 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1127 23:55:21.745974 1525568 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1127 23:55:21.745982 1525568 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1127 23:55:21.745997 1525568 filesync.go:126] Scanning /home/jenkins/minikube-integration/17206-1455288/.minikube/addons for local assets ...
	I1127 23:55:21.746062 1525568 filesync.go:126] Scanning /home/jenkins/minikube-integration/17206-1455288/.minikube/files for local assets ...
	I1127 23:55:21.746146 1525568 filesync.go:149] local asset: /home/jenkins/minikube-integration/17206-1455288/.minikube/files/etc/ssl/certs/14606522.pem -> 14606522.pem in /etc/ssl/certs
	I1127 23:55:21.746157 1525568 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-1455288/.minikube/files/etc/ssl/certs/14606522.pem -> /etc/ssl/certs/14606522.pem
	I1127 23:55:21.746263 1525568 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1127 23:55:21.756941 1525568 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-1455288/.minikube/files/etc/ssl/certs/14606522.pem --> /etc/ssl/certs/14606522.pem (1708 bytes)
	I1127 23:55:21.785921 1525568 start.go:303] post-start completed in 161.00506ms
	I1127 23:55:21.786285 1525568 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-784312-m02
	I1127 23:55:21.804477 1525568 profile.go:148] Saving config to /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/multinode-784312/config.json ...
	I1127 23:55:21.804759 1525568 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1127 23:55:21.804812 1525568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-784312-m02
	I1127 23:55:21.824453 1525568 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34149 SSHKeyPath:/home/jenkins/minikube-integration/17206-1455288/.minikube/machines/multinode-784312-m02/id_rsa Username:docker}
	I1127 23:55:21.916194 1525568 command_runner.go:130] > 18%!(MISSING)
	I1127 23:55:21.916348 1525568 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1127 23:55:21.922259 1525568 command_runner.go:130] > 160G
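
	The two df probes above read percent-used and gigabytes-available for /var. The same numbers are available without shelling out via statfs; the Use% formula below only approximates df's rounding, so treat it as a sketch:

	    package main

	    import (
	        "fmt"

	        "golang.org/x/sys/unix"
	    )

	    func main() {
	        var st unix.Statfs_t
	        if err := unix.Statfs("/var", &st); err != nil {
	            panic(err)
	        }
	        freeGB := st.Bavail * uint64(st.Bsize) / (1 << 30)
	        used := st.Blocks - st.Bfree
	        usedPct := used * 100 / (used + st.Bavail) // df rounds this up
	        fmt.Printf("%d%% used, %dG available\n", usedPct, freeGB)
	    }
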
	I1127 23:55:21.922703 1525568 start.go:128] duration metric: createHost completed in 10.483826108s
	I1127 23:55:21.922721 1525568 start.go:83] releasing machines lock for "multinode-784312-m02", held for 10.483952588s
	I1127 23:55:21.922866 1525568 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-784312-m02
	I1127 23:55:21.944972 1525568 out.go:177] * Found network options:
	I1127 23:55:21.946971 1525568 out.go:177]   - NO_PROXY=192.168.58.2
	W1127 23:55:21.948728 1525568 proxy.go:119] fail to check proxy env: Error ip not in block
	W1127 23:55:21.948774 1525568 proxy.go:119] fail to check proxy env: Error ip not in block
	I1127 23:55:21.948846 1525568 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1127 23:55:21.948893 1525568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-784312-m02
	I1127 23:55:21.949153 1525568 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1127 23:55:21.949209 1525568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-784312-m02
	I1127 23:55:21.972304 1525568 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34149 SSHKeyPath:/home/jenkins/minikube-integration/17206-1455288/.minikube/machines/multinode-784312-m02/id_rsa Username:docker}
	I1127 23:55:21.973583 1525568 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34149 SSHKeyPath:/home/jenkins/minikube-integration/17206-1455288/.minikube/machines/multinode-784312-m02/id_rsa Username:docker}
	I1127 23:55:22.203536 1525568 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
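
	The registry reachability check above is `curl -sS -m 2` against https://registry.k8s.io/, and the body it logs is that host's redirect notice. An equivalent sketch, with redirect-following disabled to match curl without -L:

	    package main

	    import (
	        "fmt"
	        "io"
	        "net/http"
	        "time"
	    )

	    func main() {
	        client := &http.Client{
	            Timeout: 2 * time.Second, // mirrors -m 2
	            CheckRedirect: func(req *http.Request, via []*http.Request) error {
	                return http.ErrUseLastResponse // curl without -L does not follow redirects
	            },
	        }
	        resp, err := client.Get("https://registry.k8s.io/")
	        if err != nil {
	            fmt.Println("registry unreachable:", err)
	            return
	        }
	        defer resp.Body.Close()
	        body, _ := io.ReadAll(resp.Body)
	        fmt.Printf("%s\n%s\n", resp.Status, body) // the log shows the "Temporary Redirect" body
	    }
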
	I1127 23:55:22.239498 1525568 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1127 23:55:22.245032 1525568 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I1127 23:55:22.245059 1525568 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I1127 23:55:22.245067 1525568 command_runner.go:130] > Device: b3h/179d	Inode: 5708571     Links: 1
	I1127 23:55:22.245075 1525568 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1127 23:55:22.245082 1525568 command_runner.go:130] > Access: 2023-06-14 14:44:50.000000000 +0000
	I1127 23:55:22.245089 1525568 command_runner.go:130] > Modify: 2023-06-14 14:44:50.000000000 +0000
	I1127 23:55:22.245109 1525568 command_runner.go:130] > Change: 2023-11-27 23:30:31.978009364 +0000
	I1127 23:55:22.245121 1525568 command_runner.go:130] >  Birth: 2023-11-27 23:30:31.978009364 +0000
	I1127 23:55:22.245448 1525568 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1127 23:55:22.269208 1525568 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1127 23:55:22.269284 1525568 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1127 23:55:22.308528 1525568 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf, 
	I1127 23:55:22.308557 1525568 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
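
	The find/mv passes above sideline conflicting CNI configs by appending a .mk_disabled suffix so cri-o no longer loads them. A local sketch of the same rename pass:

	    package main

	    import (
	        "fmt"
	        "os"
	        "path/filepath"
	    )

	    func main() {
	        // Patterns match the bridge/podman configs disabled in the log.
	        for _, pat := range []string{"/etc/cni/net.d/*bridge*", "/etc/cni/net.d/*podman*"} {
	            matches, _ := filepath.Glob(pat)
	            for _, m := range matches {
	                if filepath.Ext(m) == ".mk_disabled" {
	                    continue // already disabled on a previous pass
	                }
	                if err := os.Rename(m, m+".mk_disabled"); err != nil {
	                    fmt.Println("rename failed:", err)
	                    continue
	                }
	                fmt.Println("disabled", m)
	            }
	        }
	    }
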
	I1127 23:55:22.308565 1525568 start.go:472] detecting cgroup driver to use...
	I1127 23:55:22.308598 1525568 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1127 23:55:22.308653 1525568 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1127 23:55:22.328868 1525568 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1127 23:55:22.342717 1525568 docker.go:203] disabling cri-docker service (if available) ...
	I1127 23:55:22.342780 1525568 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1127 23:55:22.361690 1525568 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1127 23:55:22.377905 1525568 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1127 23:55:22.483669 1525568 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1127 23:55:22.603276 1525568 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I1127 23:55:22.603352 1525568 docker.go:219] disabling docker service ...
	I1127 23:55:22.603428 1525568 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1127 23:55:22.626369 1525568 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1127 23:55:22.642547 1525568 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1127 23:55:22.754086 1525568 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I1127 23:55:22.754190 1525568 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1127 23:55:22.767841 1525568 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I1127 23:55:22.877012 1525568 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1127 23:55:22.892147 1525568 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1127 23:55:22.911125 1525568 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1127 23:55:22.912625 1525568 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1127 23:55:22.912700 1525568 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1127 23:55:22.928093 1525568 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1127 23:55:22.928169 1525568 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1127 23:55:22.940269 1525568 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1127 23:55:22.952232 1525568 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1127 23:55:22.964487 1525568 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1127 23:55:22.976108 1525568 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1127 23:55:22.985676 1525568 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1127 23:55:22.986966 1525568 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1127 23:55:22.997421 1525568 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1127 23:55:23.100683 1525568 ssh_runner.go:195] Run: sudo systemctl restart crio
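
	The sed edits above pin cri-o's pause image and cgroup manager in /etc/crio/crio.conf.d/02-crio.conf before the restart. A sketch of the same line-level rewrites done in-process; the sample file content below is an assumption, and only the two replacement lines come from the log:

	    package main

	    import (
	        "fmt"
	        "regexp"
	    )

	    func main() {
	        // Assumed starting content; the real file is read from the node.
	        conf := "[crio.image]\npause_image = \"registry.k8s.io/pause:3.6\"\n[crio.runtime]\ncgroup_manager = \"systemd\"\n"
	        // (?m) makes ^ and $ match per line, like sed's default addressing.
	        conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
	            ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
	        conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
	            ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	        fmt.Print(conf)
	    }
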
	I1127 23:55:23.213883 1525568 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1127 23:55:23.213954 1525568 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1127 23:55:23.218627 1525568 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1127 23:55:23.218651 1525568 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1127 23:55:23.218660 1525568 command_runner.go:130] > Device: bdh/189d	Inode: 190         Links: 1
	I1127 23:55:23.218669 1525568 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1127 23:55:23.218676 1525568 command_runner.go:130] > Access: 2023-11-27 23:55:23.198830582 +0000
	I1127 23:55:23.218684 1525568 command_runner.go:130] > Modify: 2023-11-27 23:55:23.198830582 +0000
	I1127 23:55:23.218694 1525568 command_runner.go:130] > Change: 2023-11-27 23:55:23.198830582 +0000
	I1127 23:55:23.218699 1525568 command_runner.go:130] >  Birth: -
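
	The stat above confirms the socket is present immediately after the restart; the surrounding "Will wait 60s for socket path" logic is a simple poll, sketched here:

	    package main

	    import (
	        "fmt"
	        "os"
	        "time"
	    )

	    func main() {
	        deadline := time.Now().Add(60 * time.Second) // matches "Will wait 60s" in the log
	        for time.Now().Before(deadline) {
	            // Succeed once the path exists and is actually a socket.
	            if fi, err := os.Stat("/var/run/crio/crio.sock"); err == nil && fi.Mode()&os.ModeSocket != 0 {
	                fmt.Println("crio socket is up")
	                return
	            }
	            time.Sleep(500 * time.Millisecond) // assumed poll interval
	        }
	        fmt.Println("timed out waiting for crio.sock")
	    }
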
	I1127 23:55:23.218711 1525568 start.go:540] Will wait 60s for crictl version
	I1127 23:55:23.218766 1525568 ssh_runner.go:195] Run: which crictl
	I1127 23:55:23.223253 1525568 command_runner.go:130] > /usr/bin/crictl
	I1127 23:55:23.223708 1525568 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1127 23:55:23.264685 1525568 command_runner.go:130] > Version:  0.1.0
	I1127 23:55:23.264904 1525568 command_runner.go:130] > RuntimeName:  cri-o
	I1127 23:55:23.265095 1525568 command_runner.go:130] > RuntimeVersion:  1.24.6
	I1127 23:55:23.265269 1525568 command_runner.go:130] > RuntimeApiVersion:  v1
	I1127 23:55:23.268276 1525568 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1127 23:55:23.268361 1525568 ssh_runner.go:195] Run: crio --version
	I1127 23:55:23.314508 1525568 command_runner.go:130] > crio version 1.24.6
	I1127 23:55:23.314532 1525568 command_runner.go:130] > Version:          1.24.6
	I1127 23:55:23.314542 1525568 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I1127 23:55:23.314551 1525568 command_runner.go:130] > GitTreeState:     clean
	I1127 23:55:23.314558 1525568 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I1127 23:55:23.314564 1525568 command_runner.go:130] > GoVersion:        go1.18.2
	I1127 23:55:23.314569 1525568 command_runner.go:130] > Compiler:         gc
	I1127 23:55:23.314575 1525568 command_runner.go:130] > Platform:         linux/arm64
	I1127 23:55:23.314585 1525568 command_runner.go:130] > Linkmode:         dynamic
	I1127 23:55:23.314597 1525568 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1127 23:55:23.314606 1525568 command_runner.go:130] > SeccompEnabled:   true
	I1127 23:55:23.314611 1525568 command_runner.go:130] > AppArmorEnabled:  false
	I1127 23:55:23.316852 1525568 ssh_runner.go:195] Run: crio --version
	I1127 23:55:23.358812 1525568 command_runner.go:130] > crio version 1.24.6
	I1127 23:55:23.358832 1525568 command_runner.go:130] > Version:          1.24.6
	I1127 23:55:23.358841 1525568 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I1127 23:55:23.358846 1525568 command_runner.go:130] > GitTreeState:     clean
	I1127 23:55:23.358854 1525568 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I1127 23:55:23.358860 1525568 command_runner.go:130] > GoVersion:        go1.18.2
	I1127 23:55:23.358865 1525568 command_runner.go:130] > Compiler:         gc
	I1127 23:55:23.358871 1525568 command_runner.go:130] > Platform:         linux/arm64
	I1127 23:55:23.358878 1525568 command_runner.go:130] > Linkmode:         dynamic
	I1127 23:55:23.358892 1525568 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1127 23:55:23.358914 1525568 command_runner.go:130] > SeccompEnabled:   true
	I1127 23:55:23.358920 1525568 command_runner.go:130] > AppArmorEnabled:  false
	I1127 23:55:23.363101 1525568 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.6 ...
	I1127 23:55:23.364788 1525568 out.go:177]   - env NO_PROXY=192.168.58.2
	I1127 23:55:23.366487 1525568 cli_runner.go:164] Run: docker network inspect multinode-784312 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1127 23:55:23.384232 1525568 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I1127 23:55:23.388925 1525568 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
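The one-liner above refreshes the host.minikube.internal entry: it filters any stale mapping out of /etc/hosts, appends the current gateway address, and copies the temp file back in with sudo. The same commands, split out for readability (192.168.58.1 is the gateway reported earlier in the log):

	{ grep -v $'\thost.minikube.internal$' /etc/hosts
	  echo "192.168.58.1	host.minikube.internal"
	} > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts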
	I1127 23:55:23.407550 1525568 certs.go:56] Setting up /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/multinode-784312 for IP: 192.168.58.3
	I1127 23:55:23.407582 1525568 certs.go:190] acquiring lock for shared ca certs: {Name:mk268ef230412b241734813f303d69d9b36c42ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1127 23:55:23.407729 1525568 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17206-1455288/.minikube/ca.key
	I1127 23:55:23.407772 1525568 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17206-1455288/.minikube/proxy-client-ca.key
	I1127 23:55:23.407785 1525568 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-1455288/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1127 23:55:23.407804 1525568 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-1455288/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1127 23:55:23.407821 1525568 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-1455288/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1127 23:55:23.407836 1525568 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-1455288/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1127 23:55:23.407892 1525568 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-1455288/.minikube/certs/home/jenkins/minikube-integration/17206-1455288/.minikube/certs/1460652.pem (1338 bytes)
	W1127 23:55:23.407927 1525568 certs.go:433] ignoring /home/jenkins/minikube-integration/17206-1455288/.minikube/certs/home/jenkins/minikube-integration/17206-1455288/.minikube/certs/1460652_empty.pem, impossibly tiny 0 bytes
	I1127 23:55:23.407940 1525568 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-1455288/.minikube/certs/home/jenkins/minikube-integration/17206-1455288/.minikube/certs/ca-key.pem (1679 bytes)
	I1127 23:55:23.407970 1525568 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-1455288/.minikube/certs/home/jenkins/minikube-integration/17206-1455288/.minikube/certs/ca.pem (1078 bytes)
	I1127 23:55:23.407999 1525568 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-1455288/.minikube/certs/home/jenkins/minikube-integration/17206-1455288/.minikube/certs/cert.pem (1123 bytes)
	I1127 23:55:23.408025 1525568 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-1455288/.minikube/certs/home/jenkins/minikube-integration/17206-1455288/.minikube/certs/key.pem (1679 bytes)
	I1127 23:55:23.408077 1525568 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-1455288/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17206-1455288/.minikube/files/etc/ssl/certs/14606522.pem (1708 bytes)
	I1127 23:55:23.408109 1525568 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-1455288/.minikube/certs/1460652.pem -> /usr/share/ca-certificates/1460652.pem
	I1127 23:55:23.408123 1525568 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-1455288/.minikube/files/etc/ssl/certs/14606522.pem -> /usr/share/ca-certificates/14606522.pem
	I1127 23:55:23.408138 1525568 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-1455288/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1127 23:55:23.408466 1525568 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-1455288/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1127 23:55:23.437111 1525568 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-1455288/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1127 23:55:23.465550 1525568 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-1455288/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1127 23:55:23.494693 1525568 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-1455288/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1127 23:55:23.523698 1525568 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-1455288/.minikube/certs/1460652.pem --> /usr/share/ca-certificates/1460652.pem (1338 bytes)
	I1127 23:55:23.553655 1525568 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-1455288/.minikube/files/etc/ssl/certs/14606522.pem --> /usr/share/ca-certificates/14606522.pem (1708 bytes)
	I1127 23:55:23.582778 1525568 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-1455288/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1127 23:55:23.612638 1525568 ssh_runner.go:195] Run: openssl version
	I1127 23:55:23.619297 1525568 command_runner.go:130] > OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)
	I1127 23:55:23.619656 1525568 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14606522.pem && ln -fs /usr/share/ca-certificates/14606522.pem /etc/ssl/certs/14606522.pem"
	I1127 23:55:23.631322 1525568 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14606522.pem
	I1127 23:55:23.635853 1525568 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Nov 27 23:38 /usr/share/ca-certificates/14606522.pem
	I1127 23:55:23.635995 1525568 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Nov 27 23:38 /usr/share/ca-certificates/14606522.pem
	I1127 23:55:23.636069 1525568 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14606522.pem
	I1127 23:55:23.644168 1525568 command_runner.go:130] > 3ec20f2e
	I1127 23:55:23.644605 1525568 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14606522.pem /etc/ssl/certs/3ec20f2e.0"
	I1127 23:55:23.656150 1525568 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1127 23:55:23.667552 1525568 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1127 23:55:23.672097 1525568 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Nov 27 23:31 /usr/share/ca-certificates/minikubeCA.pem
	I1127 23:55:23.672379 1525568 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Nov 27 23:31 /usr/share/ca-certificates/minikubeCA.pem
	I1127 23:55:23.672464 1525568 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1127 23:55:23.680697 1525568 command_runner.go:130] > b5213941
	I1127 23:55:23.681225 1525568 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1127 23:55:23.693456 1525568 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1460652.pem && ln -fs /usr/share/ca-certificates/1460652.pem /etc/ssl/certs/1460652.pem"
	I1127 23:55:23.705334 1525568 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1460652.pem
	I1127 23:55:23.709979 1525568 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Nov 27 23:38 /usr/share/ca-certificates/1460652.pem
	I1127 23:55:23.710149 1525568 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Nov 27 23:38 /usr/share/ca-certificates/1460652.pem
	I1127 23:55:23.710205 1525568 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1460652.pem
	I1127 23:55:23.718318 1525568 command_runner.go:130] > 51391683
	I1127 23:55:23.718725 1525568 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1460652.pem /etc/ssl/certs/51391683.0"
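Each certificate install above follows OpenSSL's hashed-directory convention: place the PEM under /usr/share/ca-certificates, then symlink it into /etc/ssl/certs under its subject-hash name with a .0 suffix so openssl can find it by hash. A sketch for one of the certs (b5213941 matches the hash logged above for minikubeCA.pem):

	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"   # -> b5213941.0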
	I1127 23:55:23.730646 1525568 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1127 23:55:23.735193 1525568 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1127 23:55:23.735272 1525568 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1127 23:55:23.735390 1525568 ssh_runner.go:195] Run: crio config
	I1127 23:55:23.783626 1525568 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1127 23:55:23.783656 1525568 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1127 23:55:23.783665 1525568 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1127 23:55:23.783688 1525568 command_runner.go:130] > #
	I1127 23:55:23.783700 1525568 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1127 23:55:23.783715 1525568 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1127 23:55:23.783723 1525568 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1127 23:55:23.783734 1525568 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1127 23:55:23.783739 1525568 command_runner.go:130] > # reload'.
	I1127 23:55:23.783762 1525568 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1127 23:55:23.783781 1525568 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1127 23:55:23.783797 1525568 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1127 23:55:23.783805 1525568 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1127 23:55:23.783813 1525568 command_runner.go:130] > [crio]
	I1127 23:55:23.783820 1525568 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1127 23:55:23.783827 1525568 command_runner.go:130] > # containers images, in this directory.
	I1127 23:55:23.783840 1525568 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1127 23:55:23.783858 1525568 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1127 23:55:23.783871 1525568 command_runner.go:130] > # runroot = "/tmp/containers-user-1000/containers"
	I1127 23:55:23.783879 1525568 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1127 23:55:23.783887 1525568 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1127 23:55:23.783893 1525568 command_runner.go:130] > # storage_driver = "vfs"
	I1127 23:55:23.783914 1525568 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1127 23:55:23.783921 1525568 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1127 23:55:23.783926 1525568 command_runner.go:130] > # storage_option = [
	I1127 23:55:23.783931 1525568 command_runner.go:130] > # ]
	I1127 23:55:23.783939 1525568 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1127 23:55:23.783951 1525568 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1127 23:55:23.783957 1525568 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1127 23:55:23.783964 1525568 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1127 23:55:23.783980 1525568 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1127 23:55:23.783991 1525568 command_runner.go:130] > # always happen on a node reboot
	I1127 23:55:23.783998 1525568 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1127 23:55:23.784005 1525568 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1127 23:55:23.784015 1525568 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1127 23:55:23.784025 1525568 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1127 23:55:23.784034 1525568 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I1127 23:55:23.784045 1525568 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1127 23:55:23.784070 1525568 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1127 23:55:23.784087 1525568 command_runner.go:130] > # internal_wipe = true
	I1127 23:55:23.784099 1525568 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1127 23:55:23.784108 1525568 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1127 23:55:23.784115 1525568 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1127 23:55:23.784172 1525568 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1127 23:55:23.784183 1525568 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1127 23:55:23.784188 1525568 command_runner.go:130] > [crio.api]
	I1127 23:55:23.784194 1525568 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1127 23:55:23.784200 1525568 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1127 23:55:23.784207 1525568 command_runner.go:130] > # IP address on which the stream server will listen.
	I1127 23:55:23.784221 1525568 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1127 23:55:23.784238 1525568 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1127 23:55:23.784246 1525568 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1127 23:55:23.784254 1525568 command_runner.go:130] > # stream_port = "0"
	I1127 23:55:23.784261 1525568 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1127 23:55:23.784271 1525568 command_runner.go:130] > # stream_enable_tls = false
	I1127 23:55:23.784279 1525568 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1127 23:55:23.784423 1525568 command_runner.go:130] > # stream_idle_timeout = ""
	I1127 23:55:23.784436 1525568 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1127 23:55:23.784450 1525568 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1127 23:55:23.784455 1525568 command_runner.go:130] > # minutes.
	I1127 23:55:23.784584 1525568 command_runner.go:130] > # stream_tls_cert = ""
	I1127 23:55:23.784596 1525568 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1127 23:55:23.784608 1525568 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1127 23:55:23.784748 1525568 command_runner.go:130] > # stream_tls_key = ""
	I1127 23:55:23.784760 1525568 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1127 23:55:23.784773 1525568 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1127 23:55:23.784780 1525568 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1127 23:55:23.786164 1525568 command_runner.go:130] > # stream_tls_ca = ""
	I1127 23:55:23.786184 1525568 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I1127 23:55:23.786190 1525568 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1127 23:55:23.786211 1525568 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I1127 23:55:23.786226 1525568 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I1127 23:55:23.786241 1525568 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1127 23:55:23.786251 1525568 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1127 23:55:23.786257 1525568 command_runner.go:130] > [crio.runtime]
	I1127 23:55:23.786269 1525568 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1127 23:55:23.786294 1525568 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1127 23:55:23.786304 1525568 command_runner.go:130] > # "nofile=1024:2048"
	I1127 23:55:23.786321 1525568 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1127 23:55:23.786332 1525568 command_runner.go:130] > # default_ulimits = [
	I1127 23:55:23.786337 1525568 command_runner.go:130] > # ]
	I1127 23:55:23.786346 1525568 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1127 23:55:23.786354 1525568 command_runner.go:130] > # no_pivot = false
	I1127 23:55:23.786361 1525568 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1127 23:55:23.786373 1525568 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1127 23:55:23.786379 1525568 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1127 23:55:23.786397 1525568 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1127 23:55:23.786410 1525568 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1127 23:55:23.786419 1525568 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1127 23:55:23.786439 1525568 command_runner.go:130] > # conmon = ""
	I1127 23:55:23.786446 1525568 command_runner.go:130] > # Cgroup setting for conmon
	I1127 23:55:23.786459 1525568 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1127 23:55:23.786465 1525568 command_runner.go:130] > conmon_cgroup = "pod"
	I1127 23:55:23.786473 1525568 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1127 23:55:23.786484 1525568 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1127 23:55:23.786493 1525568 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1127 23:55:23.786518 1525568 command_runner.go:130] > # conmon_env = [
	I1127 23:55:23.786528 1525568 command_runner.go:130] > # ]
	I1127 23:55:23.786536 1525568 command_runner.go:130] > # Additional environment variables to set for all the
	I1127 23:55:23.786546 1525568 command_runner.go:130] > # containers. These are overridden if set in the
	I1127 23:55:23.786553 1525568 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1127 23:55:23.786559 1525568 command_runner.go:130] > # default_env = [
	I1127 23:55:23.786566 1525568 command_runner.go:130] > # ]
	I1127 23:55:23.786573 1525568 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1127 23:55:23.786582 1525568 command_runner.go:130] > # selinux = false
	I1127 23:55:23.786599 1525568 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1127 23:55:23.786613 1525568 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1127 23:55:23.786620 1525568 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1127 23:55:23.786625 1525568 command_runner.go:130] > # seccomp_profile = ""
	I1127 23:55:23.786641 1525568 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1127 23:55:23.786657 1525568 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1127 23:55:23.786665 1525568 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1127 23:55:23.786694 1525568 command_runner.go:130] > # which might increase security.
	I1127 23:55:23.786705 1525568 command_runner.go:130] > # seccomp_use_default_when_empty = true
	I1127 23:55:23.786714 1525568 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1127 23:55:23.786725 1525568 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1127 23:55:23.786733 1525568 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1127 23:55:23.786743 1525568 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1127 23:55:23.786752 1525568 command_runner.go:130] > # This option supports live configuration reload.
	I1127 23:55:23.786758 1525568 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1127 23:55:23.786774 1525568 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1127 23:55:23.786787 1525568 command_runner.go:130] > # the cgroup blockio controller.
	I1127 23:55:23.786813 1525568 command_runner.go:130] > # blockio_config_file = ""
	I1127 23:55:23.786826 1525568 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1127 23:55:23.786831 1525568 command_runner.go:130] > # irqbalance daemon.
	I1127 23:55:23.786859 1525568 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1127 23:55:23.786873 1525568 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1127 23:55:23.786880 1525568 command_runner.go:130] > # This option supports live configuration reload.
	I1127 23:55:23.786886 1525568 command_runner.go:130] > # rdt_config_file = ""
	I1127 23:55:23.786904 1525568 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1127 23:55:23.786911 1525568 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1127 23:55:23.786923 1525568 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1127 23:55:23.786938 1525568 command_runner.go:130] > # separate_pull_cgroup = ""
	I1127 23:55:23.786952 1525568 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1127 23:55:23.786960 1525568 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1127 23:55:23.786965 1525568 command_runner.go:130] > # will be added.
	I1127 23:55:23.786971 1525568 command_runner.go:130] > # default_capabilities = [
	I1127 23:55:23.786990 1525568 command_runner.go:130] > # 	"CHOWN",
	I1127 23:55:23.786995 1525568 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1127 23:55:23.787000 1525568 command_runner.go:130] > # 	"FSETID",
	I1127 23:55:23.787010 1525568 command_runner.go:130] > # 	"FOWNER",
	I1127 23:55:23.787014 1525568 command_runner.go:130] > # 	"SETGID",
	I1127 23:55:23.787019 1525568 command_runner.go:130] > # 	"SETUID",
	I1127 23:55:23.787028 1525568 command_runner.go:130] > # 	"SETPCAP",
	I1127 23:55:23.787033 1525568 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1127 23:55:23.787038 1525568 command_runner.go:130] > # 	"KILL",
	I1127 23:55:23.787042 1525568 command_runner.go:130] > # ]
	I1127 23:55:23.787104 1525568 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1127 23:55:23.787119 1525568 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1127 23:55:23.787125 1525568 command_runner.go:130] > # add_inheritable_capabilities = true
	I1127 23:55:23.787132 1525568 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1127 23:55:23.787216 1525568 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1127 23:55:23.787230 1525568 command_runner.go:130] > # default_sysctls = [
	I1127 23:55:23.787237 1525568 command_runner.go:130] > # ]
	I1127 23:55:23.787247 1525568 command_runner.go:130] > # List of devices on the host that a
	I1127 23:55:23.787266 1525568 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1127 23:55:23.787285 1525568 command_runner.go:130] > # allowed_devices = [
	I1127 23:55:23.787291 1525568 command_runner.go:130] > # 	"/dev/fuse",
	I1127 23:55:23.787295 1525568 command_runner.go:130] > # ]
	I1127 23:55:23.787301 1525568 command_runner.go:130] > # List of additional devices, specified as
	I1127 23:55:23.787321 1525568 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1127 23:55:23.787332 1525568 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1127 23:55:23.787340 1525568 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1127 23:55:23.787367 1525568 command_runner.go:130] > # additional_devices = [
	I1127 23:55:23.787380 1525568 command_runner.go:130] > # ]
	I1127 23:55:23.787389 1525568 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1127 23:55:23.787398 1525568 command_runner.go:130] > # cdi_spec_dirs = [
	I1127 23:55:23.787403 1525568 command_runner.go:130] > # 	"/etc/cdi",
	I1127 23:55:23.787408 1525568 command_runner.go:130] > # 	"/var/run/cdi",
	I1127 23:55:23.787416 1525568 command_runner.go:130] > # ]
	I1127 23:55:23.787423 1525568 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1127 23:55:23.787443 1525568 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1127 23:55:23.787454 1525568 command_runner.go:130] > # Defaults to false.
	I1127 23:55:23.787461 1525568 command_runner.go:130] > # device_ownership_from_security_context = false
	I1127 23:55:23.787473 1525568 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1127 23:55:23.787481 1525568 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1127 23:55:23.787489 1525568 command_runner.go:130] > # hooks_dir = [
	I1127 23:55:23.787496 1525568 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1127 23:55:23.787504 1525568 command_runner.go:130] > # ]
	I1127 23:55:23.787520 1525568 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1127 23:55:23.787537 1525568 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1127 23:55:23.787545 1525568 command_runner.go:130] > # its default mounts from the following two files:
	I1127 23:55:23.787549 1525568 command_runner.go:130] > #
	I1127 23:55:23.787557 1525568 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1127 23:55:23.787573 1525568 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1127 23:55:23.787596 1525568 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1127 23:55:23.787601 1525568 command_runner.go:130] > #
	I1127 23:55:23.787609 1525568 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1127 23:55:23.787617 1525568 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1127 23:55:23.787632 1525568 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1127 23:55:23.787639 1525568 command_runner.go:130] > #      only add mounts it finds in this file.
	I1127 23:55:23.787690 1525568 command_runner.go:130] > #
	I1127 23:55:23.787701 1525568 command_runner.go:130] > # default_mounts_file = ""
	I1127 23:55:23.787708 1525568 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1127 23:55:23.787733 1525568 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1127 23:55:23.787744 1525568 command_runner.go:130] > # pids_limit = 0
	I1127 23:55:23.787752 1525568 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1127 23:55:23.787760 1525568 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1127 23:55:23.787768 1525568 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1127 23:55:23.787780 1525568 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1127 23:55:23.787785 1525568 command_runner.go:130] > # log_size_max = -1
	I1127 23:55:23.787813 1525568 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1127 23:55:23.787824 1525568 command_runner.go:130] > # log_to_journald = false
	I1127 23:55:23.787832 1525568 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1127 23:55:23.787839 1525568 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1127 23:55:23.787848 1525568 command_runner.go:130] > # Path to directory for container attach sockets.
	I1127 23:55:23.787854 1525568 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1127 23:55:23.787863 1525568 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1127 23:55:23.787870 1525568 command_runner.go:130] > # bind_mount_prefix = ""
	I1127 23:55:23.787877 1525568 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1127 23:55:23.787893 1525568 command_runner.go:130] > # read_only = false
	I1127 23:55:23.787937 1525568 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1127 23:55:23.787959 1525568 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1127 23:55:23.787967 1525568 command_runner.go:130] > # live configuration reload.
	I1127 23:55:23.787973 1525568 command_runner.go:130] > # log_level = "info"
	I1127 23:55:23.787996 1525568 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1127 23:55:23.788018 1525568 command_runner.go:130] > # This option supports live configuration reload.
	I1127 23:55:23.788024 1525568 command_runner.go:130] > # log_filter = ""
	I1127 23:55:23.788035 1525568 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1127 23:55:23.788043 1525568 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1127 23:55:23.788051 1525568 command_runner.go:130] > # separated by comma.
	I1127 23:55:23.788056 1525568 command_runner.go:130] > # uid_mappings = ""
	I1127 23:55:23.788064 1525568 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1127 23:55:23.788072 1525568 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1127 23:55:23.788090 1525568 command_runner.go:130] > # separated by comma.
	I1127 23:55:23.788112 1525568 command_runner.go:130] > # gid_mappings = ""
	I1127 23:55:23.788121 1525568 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1127 23:55:23.788132 1525568 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1127 23:55:23.788142 1525568 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1127 23:55:23.788151 1525568 command_runner.go:130] > # minimum_mappable_uid = -1
	I1127 23:55:23.788159 1525568 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1127 23:55:23.788166 1525568 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1127 23:55:23.788190 1525568 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1127 23:55:23.788204 1525568 command_runner.go:130] > # minimum_mappable_gid = -1
	I1127 23:55:23.788217 1525568 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1127 23:55:23.788224 1525568 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1127 23:55:23.788234 1525568 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1127 23:55:23.788239 1525568 command_runner.go:130] > # ctr_stop_timeout = 30
	I1127 23:55:23.788246 1525568 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1127 23:55:23.788256 1525568 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1127 23:55:23.788262 1525568 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1127 23:55:23.788268 1525568 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1127 23:55:23.788289 1525568 command_runner.go:130] > # drop_infra_ctr = true
	I1127 23:55:23.788306 1525568 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1127 23:55:23.788319 1525568 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1127 23:55:23.788329 1525568 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1127 23:55:23.788334 1525568 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1127 23:55:23.788344 1525568 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1127 23:55:23.788358 1525568 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1127 23:55:23.788364 1525568 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1127 23:55:23.788385 1525568 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1127 23:55:23.788396 1525568 command_runner.go:130] > # pinns_path = ""
	I1127 23:55:23.788405 1525568 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1127 23:55:23.788421 1525568 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I1127 23:55:23.788436 1525568 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I1127 23:55:23.788442 1525568 command_runner.go:130] > # default_runtime = "runc"
	I1127 23:55:23.788451 1525568 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1127 23:55:23.788460 1525568 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior of creating them as directories).
	I1127 23:55:23.788474 1525568 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1127 23:55:23.788481 1525568 command_runner.go:130] > # creation as a file is not desired either.
	I1127 23:55:23.788504 1525568 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1127 23:55:23.788520 1525568 command_runner.go:130] > # the hostname is being managed dynamically.
	I1127 23:55:23.788535 1525568 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1127 23:55:23.788547 1525568 command_runner.go:130] > # ]
	I1127 23:55:23.788555 1525568 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1127 23:55:23.788563 1525568 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1127 23:55:23.788576 1525568 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I1127 23:55:23.788584 1525568 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I1127 23:55:23.788590 1525568 command_runner.go:130] > #
	I1127 23:55:23.788596 1525568 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I1127 23:55:23.788621 1525568 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I1127 23:55:23.788633 1525568 command_runner.go:130] > #  runtime_type = "oci"
	I1127 23:55:23.788641 1525568 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I1127 23:55:23.788647 1525568 command_runner.go:130] > #  privileged_without_host_devices = false
	I1127 23:55:23.788655 1525568 command_runner.go:130] > #  allowed_annotations = []
	I1127 23:55:23.788659 1525568 command_runner.go:130] > # Where:
	I1127 23:55:23.788668 1525568 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I1127 23:55:23.788676 1525568 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I1127 23:55:23.788695 1525568 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1127 23:55:23.788709 1525568 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1127 23:55:23.788714 1525568 command_runner.go:130] > #   in $PATH.
	I1127 23:55:23.788732 1525568 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I1127 23:55:23.788744 1525568 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1127 23:55:23.788752 1525568 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I1127 23:55:23.788760 1525568 command_runner.go:130] > #   state.
	I1127 23:55:23.788802 1525568 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1127 23:55:23.788823 1525568 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1127 23:55:23.788832 1525568 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1127 23:55:23.788838 1525568 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1127 23:55:23.788846 1525568 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1127 23:55:23.788855 1525568 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1127 23:55:23.788876 1525568 command_runner.go:130] > #   The currently recognized values are:
	I1127 23:55:23.788901 1525568 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1127 23:55:23.788917 1525568 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1127 23:55:23.788933 1525568 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1127 23:55:23.788948 1525568 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1127 23:55:23.788959 1525568 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1127 23:55:23.788972 1525568 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1127 23:55:23.788980 1525568 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1127 23:55:23.788991 1525568 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I1127 23:55:23.789006 1525568 command_runner.go:130] > #   should be moved to the container's cgroup
	I1127 23:55:23.789016 1525568 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1127 23:55:23.789031 1525568 command_runner.go:130] > runtime_path = "/usr/lib/cri-o-runc/sbin/runc"
	I1127 23:55:23.789043 1525568 command_runner.go:130] > runtime_type = "oci"
	I1127 23:55:23.789049 1525568 command_runner.go:130] > runtime_root = "/run/runc"
	I1127 23:55:23.789054 1525568 command_runner.go:130] > runtime_config_path = ""
	I1127 23:55:23.789062 1525568 command_runner.go:130] > monitor_path = ""
	I1127 23:55:23.789067 1525568 command_runner.go:130] > monitor_cgroup = ""
	I1127 23:55:23.789073 1525568 command_runner.go:130] > monitor_exec_cgroup = ""
	I1127 23:55:23.789124 1525568 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I1127 23:55:23.789137 1525568 command_runner.go:130] > # running containers
	I1127 23:55:23.789143 1525568 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I1127 23:55:23.789160 1525568 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I1127 23:55:23.789174 1525568 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I1127 23:55:23.789181 1525568 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I1127 23:55:23.789191 1525568 command_runner.go:130] > # Kata Containers with the default configured VMM
	I1127 23:55:23.789197 1525568 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I1127 23:55:23.789205 1525568 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I1127 23:55:23.789211 1525568 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I1127 23:55:23.789220 1525568 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I1127 23:55:23.789236 1525568 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I1127 23:55:23.789251 1525568 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1127 23:55:23.789258 1525568 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1127 23:55:23.789276 1525568 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1127 23:55:23.789292 1525568 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I1127 23:55:23.789301 1525568 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I1127 23:55:23.789312 1525568 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1127 23:55:23.789324 1525568 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1127 23:55:23.789345 1525568 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1127 23:55:23.789361 1525568 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1127 23:55:23.789379 1525568 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1127 23:55:23.789390 1525568 command_runner.go:130] > # Example:
	I1127 23:55:23.789398 1525568 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1127 23:55:23.789408 1525568 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1127 23:55:23.789414 1525568 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1127 23:55:23.789421 1525568 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1127 23:55:23.789428 1525568 command_runner.go:130] > # cpuset = "0-1"
	I1127 23:55:23.789433 1525568 command_runner.go:130] > # cpushares = 0
	I1127 23:55:23.789437 1525568 command_runner.go:130] > # Where:
	I1127 23:55:23.789464 1525568 command_runner.go:130] > # The workload name is workload-type.
	I1127 23:55:23.789480 1525568 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1127 23:55:23.789488 1525568 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1127 23:55:23.789498 1525568 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1127 23:55:23.789510 1525568 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1127 23:55:23.789518 1525568 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1127 23:55:23.789525 1525568 command_runner.go:130] > # 
	I1127 23:55:23.789542 1525568 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1127 23:55:23.789552 1525568 command_runner.go:130] > #
	I1127 23:55:23.789567 1525568 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1127 23:55:23.789582 1525568 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1127 23:55:23.789590 1525568 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1127 23:55:23.789601 1525568 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1127 23:55:23.789647 1525568 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I1127 23:55:23.789667 1525568 command_runner.go:130] > [crio.image]
	I1127 23:55:23.789675 1525568 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1127 23:55:23.789680 1525568 command_runner.go:130] > # default_transport = "docker://"
	I1127 23:55:23.789697 1525568 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1127 23:55:23.789710 1525568 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1127 23:55:23.789716 1525568 command_runner.go:130] > # global_auth_file = ""
	I1127 23:55:23.789741 1525568 command_runner.go:130] > # The image used to instantiate infra containers.
	I1127 23:55:23.789756 1525568 command_runner.go:130] > # This option supports live configuration reload.
	I1127 23:55:23.789772 1525568 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I1127 23:55:23.789787 1525568 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1127 23:55:23.789795 1525568 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1127 23:55:23.789802 1525568 command_runner.go:130] > # This option supports live configuration reload.
	I1127 23:55:23.789809 1525568 command_runner.go:130] > # pause_image_auth_file = ""
	I1127 23:55:23.789817 1525568 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1127 23:55:23.789826 1525568 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1127 23:55:23.789836 1525568 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1127 23:55:23.789877 1525568 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1127 23:55:23.789890 1525568 command_runner.go:130] > # pause_command = "/pause"
	I1127 23:55:23.789907 1525568 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1127 23:55:23.789920 1525568 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1127 23:55:23.789929 1525568 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1127 23:55:23.789938 1525568 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1127 23:55:23.789951 1525568 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1127 23:55:23.789956 1525568 command_runner.go:130] > # signature_policy = ""
	I1127 23:55:23.789967 1525568 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1127 23:55:23.789984 1525568 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1127 23:55:23.789997 1525568 command_runner.go:130] > # changing them here.
	I1127 23:55:23.790003 1525568 command_runner.go:130] > # insecure_registries = [
	I1127 23:55:23.790018 1525568 command_runner.go:130] > # ]
	I1127 23:55:23.790034 1525568 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1127 23:55:23.790041 1525568 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1127 23:55:23.790050 1525568 command_runner.go:130] > # image_volumes = "mkdir"
	I1127 23:55:23.790058 1525568 command_runner.go:130] > # Temporary directory to use for storing big files
	I1127 23:55:23.790067 1525568 command_runner.go:130] > # big_files_temporary_dir = ""
	I1127 23:55:23.790075 1525568 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1127 23:55:23.790093 1525568 command_runner.go:130] > # CNI plugins.
	I1127 23:55:23.790102 1525568 command_runner.go:130] > [crio.network]
	I1127 23:55:23.790110 1525568 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1127 23:55:23.790123 1525568 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I1127 23:55:23.790128 1525568 command_runner.go:130] > # cni_default_network = ""
	I1127 23:55:23.790135 1525568 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1127 23:55:23.790144 1525568 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1127 23:55:23.790152 1525568 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1127 23:55:23.790170 1525568 command_runner.go:130] > # plugin_dirs = [
	I1127 23:55:23.790188 1525568 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1127 23:55:23.790197 1525568 command_runner.go:130] > # ]
	I1127 23:55:23.790204 1525568 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1127 23:55:23.790213 1525568 command_runner.go:130] > [crio.metrics]
	I1127 23:55:23.790219 1525568 command_runner.go:130] > # Globally enable or disable metrics support.
	I1127 23:55:23.790225 1525568 command_runner.go:130] > # enable_metrics = false
	I1127 23:55:23.790233 1525568 command_runner.go:130] > # Specify enabled metrics collectors.
	I1127 23:55:23.790239 1525568 command_runner.go:130] > # Per default all metrics are enabled.
	I1127 23:55:23.790267 1525568 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1127 23:55:23.790280 1525568 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1127 23:55:23.790288 1525568 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1127 23:55:23.790295 1525568 command_runner.go:130] > # metrics_collectors = [
	I1127 23:55:23.790300 1525568 command_runner.go:130] > # 	"operations",
	I1127 23:55:23.790309 1525568 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1127 23:55:23.790315 1525568 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1127 23:55:23.790323 1525568 command_runner.go:130] > # 	"operations_errors",
	I1127 23:55:23.790328 1525568 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1127 23:55:23.790345 1525568 command_runner.go:130] > # 	"image_pulls_by_name",
	I1127 23:55:23.790356 1525568 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1127 23:55:23.790362 1525568 command_runner.go:130] > # 	"image_pulls_failures",
	I1127 23:55:23.790367 1525568 command_runner.go:130] > # 	"image_pulls_successes",
	I1127 23:55:23.790375 1525568 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1127 23:55:23.790382 1525568 command_runner.go:130] > # 	"image_layer_reuse",
	I1127 23:55:23.790388 1525568 command_runner.go:130] > # 	"containers_oom_total",
	I1127 23:55:23.790396 1525568 command_runner.go:130] > # 	"containers_oom",
	I1127 23:55:23.790401 1525568 command_runner.go:130] > # 	"processes_defunct",
	I1127 23:55:23.790407 1525568 command_runner.go:130] > # 	"operations_total",
	I1127 23:55:23.790431 1525568 command_runner.go:130] > # 	"operations_latency_seconds",
	I1127 23:55:23.790443 1525568 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1127 23:55:23.790449 1525568 command_runner.go:130] > # 	"operations_errors_total",
	I1127 23:55:23.790454 1525568 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1127 23:55:23.790461 1525568 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1127 23:55:23.790469 1525568 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1127 23:55:23.790474 1525568 command_runner.go:130] > # 	"image_pulls_success_total",
	I1127 23:55:23.790480 1525568 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1127 23:55:23.790487 1525568 command_runner.go:130] > # 	"containers_oom_count_total",
	I1127 23:55:23.790492 1525568 command_runner.go:130] > # ]
	I1127 23:55:23.790504 1525568 command_runner.go:130] > # The port on which the metrics server will listen.
	I1127 23:55:23.790511 1525568 command_runner.go:130] > # metrics_port = 9090
	I1127 23:55:23.790517 1525568 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1127 23:55:23.790522 1525568 command_runner.go:130] > # metrics_socket = ""
	I1127 23:55:23.790529 1525568 command_runner.go:130] > # The certificate for the secure metrics server.
	I1127 23:55:23.790541 1525568 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1127 23:55:23.790549 1525568 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1127 23:55:23.790560 1525568 command_runner.go:130] > # certificate on any modification event.
	I1127 23:55:23.790565 1525568 command_runner.go:130] > # metrics_cert = ""
	I1127 23:55:23.790574 1525568 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1127 23:55:23.790580 1525568 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1127 23:55:23.790586 1525568 command_runner.go:130] > # metrics_key = ""
	I1127 23:55:23.790620 1525568 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1127 23:55:23.790629 1525568 command_runner.go:130] > [crio.tracing]
	I1127 23:55:23.790650 1525568 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1127 23:55:23.790657 1525568 command_runner.go:130] > # enable_tracing = false
	I1127 23:55:23.790663 1525568 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I1127 23:55:23.790669 1525568 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1127 23:55:23.790676 1525568 command_runner.go:130] > # Number of samples to collect per million spans.
	I1127 23:55:23.790682 1525568 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1127 23:55:23.790689 1525568 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1127 23:55:23.790694 1525568 command_runner.go:130] > [crio.stats]
	I1127 23:55:23.790701 1525568 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1127 23:55:23.790725 1525568 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1127 23:55:23.790737 1525568 command_runner.go:130] > # stats_collection_period = 0
	I1127 23:55:23.792676 1525568 command_runner.go:130] ! time="2023-11-27 23:55:23.780956118Z" level=info msg="Starting CRI-O, version: 1.24.6, git: 4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90(clean)"
	I1127 23:55:23.792700 1525568 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
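	Everything above is CRI-O's default configuration echoed back with each key commented out. A minimal sketch of overriding one of those keys without editing the main crio.conf, assuming the stock drop-in directory of a systemd-managed CRI-O install like the one in this log:
	# enable the Prometheus endpoint described under [crio.metrics] above
	sudo mkdir -p /etc/crio/crio.conf.d
	printf '[crio.metrics]\nenable_metrics = true\nmetrics_port = 9090\n' |
	  sudo tee /etc/crio/crio.conf.d/10-metrics.conf
	sudo systemctl restart crio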
	I1127 23:55:23.792759 1525568 cni.go:84] Creating CNI manager for ""
	I1127 23:55:23.792766 1525568 cni.go:136] 2 nodes found, recommending kindnet
	I1127 23:55:23.792776 1525568 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1127 23:55:23.792796 1525568 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.3 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-784312 NodeName:multinode-784312-m02 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.3 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/e
tc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1127 23:55:23.792917 1525568 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.3
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-784312-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.3
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
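	A sketch for double-checking the rendered kubeadm config on the worker; the /var/tmp/minikube/kubeadm.yaml path is an assumption based on minikube's usual layout and does not appear in this log:
	minikube -p multinode-784312 ssh -n m02 -- sudo cat /var/tmp/minikube/kubeadm.yaml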
	I1127 23:55:23.792971 1525568 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=multinode-784312-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-784312 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1127 23:55:23.793035 1525568 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1127 23:55:23.803163 1525568 command_runner.go:130] > kubeadm
	I1127 23:55:23.803188 1525568 command_runner.go:130] > kubectl
	I1127 23:55:23.803194 1525568 command_runner.go:130] > kubelet
	I1127 23:55:23.804348 1525568 binaries.go:44] Found k8s binaries, skipping transfer
	I1127 23:55:23.804413 1525568 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I1127 23:55:23.818377 1525568 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1127 23:55:23.841267 1525568 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
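	The unit file and the 10-kubeadm.conf drop-in just written can be viewed merged, straight from systemd; a sketch run from the host (the -n/--node flag selects the m02 machine):
	minikube -p multinode-784312 ssh -n m02 -- systemctl cat kubelet
	# after editing either file by hand, reload and restart:
	minikube -p multinode-784312 ssh -n m02 -- "sudo systemctl daemon-reload && sudo systemctl restart kubelet"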
	I1127 23:55:23.863769 1525568 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I1127 23:55:23.868438 1525568 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
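	The one-liner above is the idempotent pattern minikube uses to pin control-plane.minikube.internal in /etc/hosts: strip any stale entry, append the current mapping, and copy the temp file over the original in one step. The same pattern, spelled out (host and IP taken from the log):
	{ grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts; \
	  printf '192.168.58.2\tcontrol-plane.minikube.internal\n'; } > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts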
	I1127 23:55:23.881566 1525568 host.go:66] Checking if "multinode-784312" exists ...
	I1127 23:55:23.881831 1525568 start.go:304] JoinCluster: &{Name:multinode-784312 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-784312 Namespace:default APIServerName:minikubeCA APIServerNames:[] A
PIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:
9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1127 23:55:23.881957 1525568 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1127 23:55:23.882009 1525568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-784312
	I1127 23:55:23.882317 1525568 config.go:182] Loaded profile config "multinode-784312": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1127 23:55:23.900361 1525568 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34144 SSHKeyPath:/home/jenkins/minikube-integration/17206-1455288/.minikube/machines/multinode-784312/id_rsa Username:docker}
	I1127 23:55:24.075553 1525568 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token dvcj7v.cs5tvn1j3thlj832 --discovery-token-ca-cert-hash sha256:4bf7580b26a5e006cb8545a36d546acc708a5c0d8ea7cd28dd99f58e9fcb6509 
	I1127 23:55:24.075596 1525568 start.go:325] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1127 23:55:24.075625 1525568 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token dvcj7v.cs5tvn1j3thlj832 --discovery-token-ca-cert-hash sha256:4bf7580b26a5e006cb8545a36d546acc708a5c0d8ea7cd28dd99f58e9fcb6509 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-784312-m02"
	I1127 23:55:24.128176 1525568 command_runner.go:130] > [preflight] Running pre-flight checks
	I1127 23:55:24.173849 1525568 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
	I1127 23:55:24.173906 1525568 command_runner.go:130] > KERNEL_VERSION: 5.15.0-1050-aws
	I1127 23:55:24.173913 1525568 command_runner.go:130] > OS: Linux
	I1127 23:55:24.173920 1525568 command_runner.go:130] > CGROUPS_CPU: enabled
	I1127 23:55:24.173931 1525568 command_runner.go:130] > CGROUPS_CPUACCT: enabled
	I1127 23:55:24.173937 1525568 command_runner.go:130] > CGROUPS_CPUSET: enabled
	I1127 23:55:24.173944 1525568 command_runner.go:130] > CGROUPS_DEVICES: enabled
	I1127 23:55:24.173955 1525568 command_runner.go:130] > CGROUPS_FREEZER: enabled
	I1127 23:55:24.173962 1525568 command_runner.go:130] > CGROUPS_MEMORY: enabled
	I1127 23:55:24.173979 1525568 command_runner.go:130] > CGROUPS_PIDS: enabled
	I1127 23:55:24.173986 1525568 command_runner.go:130] > CGROUPS_HUGETLB: enabled
	I1127 23:55:24.173996 1525568 command_runner.go:130] > CGROUPS_BLKIO: enabled
	I1127 23:55:24.286519 1525568 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I1127 23:55:24.286543 1525568 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I1127 23:55:24.319886 1525568 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1127 23:55:24.320141 1525568 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1127 23:55:24.320286 1525568 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1127 23:55:24.431078 1525568 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I1127 23:55:26.944706 1525568 command_runner.go:130] > This node has joined the cluster:
	I1127 23:55:26.944728 1525568 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I1127 23:55:26.944741 1525568 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I1127 23:55:26.944750 1525568 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I1127 23:55:26.948068 1525568 command_runner.go:130] ! W1127 23:55:24.127610    1024 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/crio/crio.sock". Please update your configuration!
	I1127 23:55:26.948098 1525568 command_runner.go:130] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1050-aws\n", err: exit status 1
	I1127 23:55:26.948114 1525568 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1127 23:55:26.948139 1525568 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token dvcj7v.cs5tvn1j3thlj832 --discovery-token-ca-cert-hash sha256:4bf7580b26a5e006cb8545a36d546acc708a5c0d8ea7cd28dd99f58e9fcb6509 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-784312-m02": (2.872502168s)
	I1127 23:55:26.948160 1525568 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1127 23:55:27.207912 1525568 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /lib/systemd/system/kubelet.service.
	I1127 23:55:27.207938 1525568 start.go:306] JoinCluster complete in 3.326107363s
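	Since the token was created with --ttl=0 (non-expiring), the printed join command stays reusable for further workers. Once JoinCluster completes, the new node should be visible from the control plane; a quick check via the kubeconfig context minikube maintains for the profile:
	kubectl --context multinode-784312 get nodes -o wide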
	I1127 23:55:27.207951 1525568 cni.go:84] Creating CNI manager for ""
	I1127 23:55:27.207957 1525568 cni.go:136] 2 nodes found, recommending kindnet
	I1127 23:55:27.208006 1525568 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1127 23:55:27.218448 1525568 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I1127 23:55:27.218478 1525568 command_runner.go:130] >   Size: 3841245   	Blocks: 7504       IO Block: 4096   regular file
	I1127 23:55:27.218487 1525568 command_runner.go:130] > Device: 36h/54d	Inode: 5712268     Links: 1
	I1127 23:55:27.218495 1525568 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I1127 23:55:27.218517 1525568 command_runner.go:130] > Access: 2023-05-09 19:54:42.000000000 +0000
	I1127 23:55:27.218529 1525568 command_runner.go:130] > Modify: 2023-05-09 19:54:42.000000000 +0000
	I1127 23:55:27.218539 1525568 command_runner.go:130] > Change: 2023-11-27 23:30:32.626003656 +0000
	I1127 23:55:27.218549 1525568 command_runner.go:130] >  Birth: 2023-11-27 23:30:32.582004044 +0000
	I1127 23:55:27.219307 1525568 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I1127 23:55:27.219330 1525568 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1127 23:55:27.254844 1525568 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1127 23:55:27.616656 1525568 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I1127 23:55:27.623821 1525568 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I1127 23:55:27.629690 1525568 command_runner.go:130] > serviceaccount/kindnet unchanged
	I1127 23:55:27.648992 1525568 command_runner.go:130] > daemonset.apps/kindnet configured
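	After the apply, kindnet runs as a DaemonSet and should schedule one pod per node, including the new m02. A sketch for confirming that (the app=kindnet label is an assumption from the usual kindnet manifest, not shown in this log):
	kubectl --context multinode-784312 -n kube-system get daemonset kindnet
	kubectl --context multinode-784312 -n kube-system get pods -l app=kindnet -o wide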
	I1127 23:55:27.655040 1525568 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17206-1455288/kubeconfig
	I1127 23:55:27.655353 1525568 kapi.go:59] client config for multinode-784312: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/multinode-784312/client.crt", KeyFile:"/home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/multinode-784312/client.key", CAFile:"/home/jenkins/minikube-integration/17206-1455288/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16c6420), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1127 23:55:27.655666 1525568 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1127 23:55:27.655681 1525568 round_trippers.go:469] Request Headers:
	I1127 23:55:27.655691 1525568 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:27.655698 1525568 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1127 23:55:27.662675 1525568 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1127 23:55:27.662702 1525568 round_trippers.go:577] Response Headers:
	I1127 23:55:27.662711 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ef08c5e-2262-449a-8b3b-5a816f8a0487
	I1127 23:55:27.662718 1525568 round_trippers.go:580]     Content-Length: 291
	I1127 23:55:27.662725 1525568 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:27 GMT
	I1127 23:55:27.662731 1525568 round_trippers.go:580]     Audit-Id: ba150634-4761-4b87-ba01-6e468b6b9d44
	I1127 23:55:27.662737 1525568 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:27.662744 1525568 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:27.662757 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 319b43a7-5ec8-4ef8-8721-c1f74315c4f2
	I1127 23:55:27.662779 1525568 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"a3cec32e-d838-4f12-bc00-b18b4198854e","resourceVersion":"422","creationTimestamp":"2023-11-27T23:54:22Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I1127 23:55:27.662868 1525568 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-784312" context rescaled to 1 replicas
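	The rescale above goes through the Deployment's scale subresource (the GET on .../deployments/coredns/scale, followed by an update). The kubectl equivalent of the same operation:
	kubectl --context multinode-784312 -n kube-system scale deployment coredns --replicas=1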
	I1127 23:55:27.662905 1525568 start.go:223] Will wait 6m0s for node &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1127 23:55:27.665363 1525568 out.go:177] * Verifying Kubernetes components...
	I1127 23:55:27.667348 1525568 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1127 23:55:27.684737 1525568 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17206-1455288/kubeconfig
	I1127 23:55:27.685059 1525568 kapi.go:59] client config for multinode-784312: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/multinode-784312/client.crt", KeyFile:"/home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/multinode-784312/client.key", CAFile:"/home/jenkins/minikube-integration/17206-1455288/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16c6420), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1127 23:55:27.685388 1525568 node_ready.go:35] waiting up to 6m0s for node "multinode-784312-m02" to be "Ready" ...
	I1127 23:55:27.685483 1525568 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-784312-m02
	I1127 23:55:27.685495 1525568 round_trippers.go:469] Request Headers:
	I1127 23:55:27.685513 1525568 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:27.685526 1525568 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1127 23:55:27.689781 1525568 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1127 23:55:27.689847 1525568 round_trippers.go:577] Response Headers:
	I1127 23:55:27.689896 1525568 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:27 GMT
	I1127 23:55:27.689922 1525568 round_trippers.go:580]     Audit-Id: 100d81b0-c6e9-4742-b4dd-f31054f662b2
	I1127 23:55:27.689943 1525568 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:27.689963 1525568 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:27.689980 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 319b43a7-5ec8-4ef8-8721-c1f74315c4f2
	I1127 23:55:27.689998 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ef08c5e-2262-449a-8b3b-5a816f8a0487
	I1127 23:55:27.690664 1525568 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-784312-m02","uid":"51da0d2c-5867-4d61-a2e2-21c69d88c1c1","resourceVersion":"461","creationTimestamp":"2023-11-27T23:55:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-784312-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:26Z","fieldsTyp
e":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 5183 chars]
	I1127 23:55:27.691183 1525568 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-784312-m02
	I1127 23:55:27.691234 1525568 round_trippers.go:469] Request Headers:
	I1127 23:55:27.691265 1525568 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1127 23:55:27.691286 1525568 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:27.694574 1525568 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1127 23:55:27.694635 1525568 round_trippers.go:577] Response Headers:
	I1127 23:55:27.694657 1525568 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:27 GMT
	I1127 23:55:27.694687 1525568 round_trippers.go:580]     Audit-Id: 26b8fb58-7a2f-493e-a579-532e0231124d
	I1127 23:55:27.694708 1525568 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:27.694728 1525568 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:27.694747 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 319b43a7-5ec8-4ef8-8721-c1f74315c4f2
	I1127 23:55:27.694777 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ef08c5e-2262-449a-8b3b-5a816f8a0487
	I1127 23:55:27.695349 1525568 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-784312-m02","uid":"51da0d2c-5867-4d61-a2e2-21c69d88c1c1","resourceVersion":"461","creationTimestamp":"2023-11-27T23:55:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-784312-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:26Z","fieldsTyp
e":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 5183 chars]
	I1127 23:55:28.196014 1525568 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-784312-m02
	I1127 23:55:28.196036 1525568 round_trippers.go:469] Request Headers:
	I1127 23:55:28.196046 1525568 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1127 23:55:28.196053 1525568 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:28.199115 1525568 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1127 23:55:28.199142 1525568 round_trippers.go:577] Response Headers:
	I1127 23:55:28.199151 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 319b43a7-5ec8-4ef8-8721-c1f74315c4f2
	I1127 23:55:28.199158 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ef08c5e-2262-449a-8b3b-5a816f8a0487
	I1127 23:55:28.199165 1525568 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:28 GMT
	I1127 23:55:28.199173 1525568 round_trippers.go:580]     Audit-Id: 06a8264e-51f8-4006-965a-2eb2774aaa42
	I1127 23:55:28.199180 1525568 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:28.199186 1525568 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:28.199371 1525568 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-784312-m02","uid":"51da0d2c-5867-4d61-a2e2-21c69d88c1c1","resourceVersion":"461","creationTimestamp":"2023-11-27T23:55:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-784312-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:26Z","fieldsTyp
e":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 5183 chars]
	I1127 23:55:28.695954 1525568 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-784312-m02
	I1127 23:55:28.695979 1525568 round_trippers.go:469] Request Headers:
	I1127 23:55:28.695989 1525568 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:28.695996 1525568 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1127 23:55:28.698516 1525568 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:55:28.698540 1525568 round_trippers.go:577] Response Headers:
	I1127 23:55:28.698549 1525568 round_trippers.go:580]     Audit-Id: 29f009d6-ffe2-40dc-8ca9-faa2de85eece
	I1127 23:55:28.698556 1525568 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:28.698562 1525568 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:28.698569 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 319b43a7-5ec8-4ef8-8721-c1f74315c4f2
	I1127 23:55:28.698575 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ef08c5e-2262-449a-8b3b-5a816f8a0487
	I1127 23:55:28.698584 1525568 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:28 GMT
	I1127 23:55:28.698685 1525568 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-784312-m02","uid":"51da0d2c-5867-4d61-a2e2-21c69d88c1c1","resourceVersion":"461","creationTimestamp":"2023-11-27T23:55:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-784312-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:26Z","fieldsTyp
e":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 5183 chars]
	I1127 23:55:29.195981 1525568 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-784312-m02
	I1127 23:55:29.196009 1525568 round_trippers.go:469] Request Headers:
	I1127 23:55:29.196019 1525568 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:29.196027 1525568 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1127 23:55:29.198690 1525568 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:55:29.198719 1525568 round_trippers.go:577] Response Headers:
	I1127 23:55:29.198729 1525568 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:29.198737 1525568 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:29.198743 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 319b43a7-5ec8-4ef8-8721-c1f74315c4f2
	I1127 23:55:29.198750 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ef08c5e-2262-449a-8b3b-5a816f8a0487
	I1127 23:55:29.198757 1525568 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:29 GMT
	I1127 23:55:29.198773 1525568 round_trippers.go:580]     Audit-Id: bd644438-6951-4f89-814a-7568465e9337
	I1127 23:55:29.198897 1525568 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-784312-m02","uid":"51da0d2c-5867-4d61-a2e2-21c69d88c1c1","resourceVersion":"461","creationTimestamp":"2023-11-27T23:55:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-784312-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:26Z","fieldsTyp
e":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 5183 chars]
	I1127 23:55:29.695980 1525568 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-784312-m02
	I1127 23:55:29.696004 1525568 round_trippers.go:469] Request Headers:
	I1127 23:55:29.696013 1525568 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:29.696020 1525568 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1127 23:55:29.698553 1525568 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:55:29.698576 1525568 round_trippers.go:577] Response Headers:
	I1127 23:55:29.698585 1525568 round_trippers.go:580]     Audit-Id: a88eee15-527a-4276-9c8b-53d6d4341e3d
	I1127 23:55:29.698591 1525568 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:29.698598 1525568 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:29.698603 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 319b43a7-5ec8-4ef8-8721-c1f74315c4f2
	I1127 23:55:29.698611 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ef08c5e-2262-449a-8b3b-5a816f8a0487
	I1127 23:55:29.698617 1525568 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:29 GMT
	I1127 23:55:29.698710 1525568 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-784312-m02","uid":"51da0d2c-5867-4d61-a2e2-21c69d88c1c1","resourceVersion":"461","creationTimestamp":"2023-11-27T23:55:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-784312-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:26Z","fieldsTyp
e":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 5183 chars]
	I1127 23:55:29.699070 1525568 node_ready.go:58] node "multinode-784312-m02" has status "Ready":"False"
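	Each iteration above re-fetches the Node object and checks its Ready condition; the same 6m0s wait can be expressed in one kubectl command:
	kubectl --context multinode-784312 wait --for=condition=Ready node/multinode-784312-m02 --timeout=6m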
	I1127 23:55:30.196928 1525568 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-784312-m02
	I1127 23:55:30.196956 1525568 round_trippers.go:469] Request Headers:
	I1127 23:55:30.196978 1525568 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:30.196987 1525568 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1127 23:55:30.199646 1525568 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:55:30.199673 1525568 round_trippers.go:577] Response Headers:
	I1127 23:55:30.199683 1525568 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:30 GMT
	I1127 23:55:30.199690 1525568 round_trippers.go:580]     Audit-Id: b8f10ecb-9d5d-4ca5-aa35-a149751ba763
	I1127 23:55:30.199701 1525568 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:30.199709 1525568 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:30.199715 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 319b43a7-5ec8-4ef8-8721-c1f74315c4f2
	I1127 23:55:30.199722 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ef08c5e-2262-449a-8b3b-5a816f8a0487
	I1127 23:55:30.199942 1525568 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-784312-m02","uid":"51da0d2c-5867-4d61-a2e2-21c69d88c1c1","resourceVersion":"461","creationTimestamp":"2023-11-27T23:55:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-784312-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:26Z","fieldsTyp
e":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 5183 chars]
	I1127 23:55:30.696547 1525568 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-784312-m02
	I1127 23:55:30.696571 1525568 round_trippers.go:469] Request Headers:
	I1127 23:55:30.696581 1525568 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:30.696588 1525568 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1127 23:55:30.699061 1525568 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:55:30.699116 1525568 round_trippers.go:577] Response Headers:
	I1127 23:55:30.699137 1525568 round_trippers.go:580]     Audit-Id: 49f54b48-878a-484d-8a55-dd05a6fbf27e
	I1127 23:55:30.699157 1525568 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:30.699188 1525568 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:30.699210 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 319b43a7-5ec8-4ef8-8721-c1f74315c4f2
	I1127 23:55:30.699229 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ef08c5e-2262-449a-8b3b-5a816f8a0487
	I1127 23:55:30.699282 1525568 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:30 GMT
	I1127 23:55:30.699388 1525568 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-784312-m02","uid":"51da0d2c-5867-4d61-a2e2-21c69d88c1c1","resourceVersion":"461","creationTimestamp":"2023-11-27T23:55:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-784312-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:26Z","fieldsTyp
e":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 5183 chars]
	I1127 23:55:31.196902 1525568 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-784312-m02
	I1127 23:55:31.196928 1525568 round_trippers.go:469] Request Headers:
	I1127 23:55:31.196938 1525568 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1127 23:55:31.196947 1525568 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:31.199564 1525568 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:55:31.199590 1525568 round_trippers.go:577] Response Headers:
	I1127 23:55:31.199600 1525568 round_trippers.go:580]     Audit-Id: 978e40c2-d90a-43e6-bc70-21f10664ced9
	I1127 23:55:31.199606 1525568 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:31.199612 1525568 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:31.199620 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 319b43a7-5ec8-4ef8-8721-c1f74315c4f2
	I1127 23:55:31.199627 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ef08c5e-2262-449a-8b3b-5a816f8a0487
	I1127 23:55:31.199633 1525568 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:31 GMT
	I1127 23:55:31.199825 1525568 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-784312-m02","uid":"51da0d2c-5867-4d61-a2e2-21c69d88c1c1","resourceVersion":"477","creationTimestamp":"2023-11-27T23:55:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-784312-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5292 chars]
	I1127 23:55:31.696573 1525568 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-784312-m02
	I1127 23:55:31.698701 1525568 round_trippers.go:469] Request Headers:
	I1127 23:55:31.698739 1525568 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:31.698754 1525568 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1127 23:55:31.701364 1525568 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:55:31.701385 1525568 round_trippers.go:577] Response Headers:
	I1127 23:55:31.701394 1525568 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:31 GMT
	I1127 23:55:31.701400 1525568 round_trippers.go:580]     Audit-Id: 1be057a5-9326-4682-9e9a-d4aa0743ed7b
	I1127 23:55:31.701406 1525568 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:31.701412 1525568 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:31.701418 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 319b43a7-5ec8-4ef8-8721-c1f74315c4f2
	I1127 23:55:31.701424 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ef08c5e-2262-449a-8b3b-5a816f8a0487
	I1127 23:55:31.701572 1525568 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-784312-m02","uid":"51da0d2c-5867-4d61-a2e2-21c69d88c1c1","resourceVersion":"477","creationTimestamp":"2023-11-27T23:55:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-784312-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5292 chars]
	I1127 23:55:31.701986 1525568 node_ready.go:58] node "multinode-784312-m02" has status "Ready":"False"
	I1127 23:55:32.196057 1525568 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-784312-m02
	I1127 23:55:32.196082 1525568 round_trippers.go:469] Request Headers:
	I1127 23:55:32.196092 1525568 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:32.196099 1525568 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1127 23:55:32.198570 1525568 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:55:32.198611 1525568 round_trippers.go:577] Response Headers:
	I1127 23:55:32.198619 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ef08c5e-2262-449a-8b3b-5a816f8a0487
	I1127 23:55:32.198626 1525568 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:32 GMT
	I1127 23:55:32.198633 1525568 round_trippers.go:580]     Audit-Id: 98567bec-e4b3-4e49-8e7e-dcb74f7546b0
	I1127 23:55:32.198640 1525568 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:32.198647 1525568 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:32.198658 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 319b43a7-5ec8-4ef8-8721-c1f74315c4f2
	I1127 23:55:32.198927 1525568 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-784312-m02","uid":"51da0d2c-5867-4d61-a2e2-21c69d88c1c1","resourceVersion":"477","creationTimestamp":"2023-11-27T23:55:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-784312-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5292 chars]
	I1127 23:55:32.695958 1525568 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-784312-m02
	I1127 23:55:32.695990 1525568 round_trippers.go:469] Request Headers:
	I1127 23:55:32.696004 1525568 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:32.696012 1525568 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1127 23:55:32.698587 1525568 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:55:32.698653 1525568 round_trippers.go:577] Response Headers:
	I1127 23:55:32.698675 1525568 round_trippers.go:580]     Audit-Id: 837149ac-9315-4a7b-8379-46426b6ddd4c
	I1127 23:55:32.698694 1525568 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:32.698724 1525568 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:32.698747 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 319b43a7-5ec8-4ef8-8721-c1f74315c4f2
	I1127 23:55:32.698762 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ef08c5e-2262-449a-8b3b-5a816f8a0487
	I1127 23:55:32.698768 1525568 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:32 GMT
	I1127 23:55:32.698898 1525568 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-784312-m02","uid":"51da0d2c-5867-4d61-a2e2-21c69d88c1c1","resourceVersion":"477","creationTimestamp":"2023-11-27T23:55:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-784312-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5292 chars]
	I1127 23:55:33.195982 1525568 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-784312-m02
	I1127 23:55:33.196005 1525568 round_trippers.go:469] Request Headers:
	I1127 23:55:33.196015 1525568 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:33.196022 1525568 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1127 23:55:33.198839 1525568 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:55:33.198905 1525568 round_trippers.go:577] Response Headers:
	I1127 23:55:33.198921 1525568 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:33.198934 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 319b43a7-5ec8-4ef8-8721-c1f74315c4f2
	I1127 23:55:33.198951 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ef08c5e-2262-449a-8b3b-5a816f8a0487
	I1127 23:55:33.198958 1525568 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:33 GMT
	I1127 23:55:33.198971 1525568 round_trippers.go:580]     Audit-Id: d9eb8221-3c53-448d-969a-8784c0cf5861
	I1127 23:55:33.198978 1525568 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:33.199634 1525568 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-784312-m02","uid":"51da0d2c-5867-4d61-a2e2-21c69d88c1c1","resourceVersion":"477","creationTimestamp":"2023-11-27T23:55:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-784312-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5292 chars]
	I1127 23:55:33.696843 1525568 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-784312-m02
	I1127 23:55:33.696868 1525568 round_trippers.go:469] Request Headers:
	I1127 23:55:33.696877 1525568 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:33.696884 1525568 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1127 23:55:33.700380 1525568 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1127 23:55:33.700407 1525568 round_trippers.go:577] Response Headers:
	I1127 23:55:33.700416 1525568 round_trippers.go:580]     Audit-Id: a76a16fa-bf7d-49a5-8ed2-5f4ae5f7bd53
	I1127 23:55:33.700422 1525568 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:33.700429 1525568 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:33.700435 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 319b43a7-5ec8-4ef8-8721-c1f74315c4f2
	I1127 23:55:33.700441 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ef08c5e-2262-449a-8b3b-5a816f8a0487
	I1127 23:55:33.700452 1525568 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:33 GMT
	I1127 23:55:33.700773 1525568 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-784312-m02","uid":"51da0d2c-5867-4d61-a2e2-21c69d88c1c1","resourceVersion":"477","creationTimestamp":"2023-11-27T23:55:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-784312-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5292 chars]
	I1127 23:55:34.196899 1525568 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-784312-m02
	I1127 23:55:34.196922 1525568 round_trippers.go:469] Request Headers:
	I1127 23:55:34.196932 1525568 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:34.196948 1525568 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1127 23:55:34.199459 1525568 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:55:34.199485 1525568 round_trippers.go:577] Response Headers:
	I1127 23:55:34.199495 1525568 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:34 GMT
	I1127 23:55:34.199502 1525568 round_trippers.go:580]     Audit-Id: 09868c5a-3138-4270-bd42-5ab2c26dd50a
	I1127 23:55:34.199508 1525568 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:34.199514 1525568 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:34.199522 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 319b43a7-5ec8-4ef8-8721-c1f74315c4f2
	I1127 23:55:34.199531 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ef08c5e-2262-449a-8b3b-5a816f8a0487
	I1127 23:55:34.199936 1525568 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-784312-m02","uid":"51da0d2c-5867-4d61-a2e2-21c69d88c1c1","resourceVersion":"477","creationTimestamp":"2023-11-27T23:55:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-784312-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5292 chars]
	I1127 23:55:34.200308 1525568 node_ready.go:58] node "multinode-784312-m02" has status "Ready":"False"
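	(The loop above is minikube's node-readiness poll: roughly every 500 ms it issues GET /api/v1/nodes/multinode-784312-m02, logs the response headers and the truncated Node body, and reports "Ready":"False" until the node's Ready condition flips to True. Below is a minimal client-go sketch of the same pattern; it is not minikube's actual node_ready.go implementation, and the kubeconfig path, node name, and 5-minute timeout are illustrative assumptions.)

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitNodeReady polls the API server until the named node reports a
	// Ready condition of True, or the timeout expires -- the same
	// GET-check-sleep cycle visible in the log above.
	func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err == nil {
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
						return nil
					}
				}
				fmt.Printf("node %q has status \"Ready\":\"False\"\n", name)
			}
			// Matches the ~500 ms spacing between the requests above.
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("node %q did not become Ready within %s", name, timeout)
	}

	func main() {
		// Assumes a reachable cluster via the default kubeconfig (~/.kube/config).
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		if err := waitNodeReady(context.Background(), cs, "multinode-784312-m02", 5*time.Minute); err != nil {
			panic(err)
		}
	}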
	I1127 23:55:34.695986 1525568 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-784312-m02
	I1127 23:55:34.696013 1525568 round_trippers.go:469] Request Headers:
	I1127 23:55:34.696023 1525568 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:34.696032 1525568 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1127 23:55:34.698755 1525568 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:55:34.698776 1525568 round_trippers.go:577] Response Headers:
	I1127 23:55:34.698785 1525568 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:34.698792 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 319b43a7-5ec8-4ef8-8721-c1f74315c4f2
	I1127 23:55:34.698798 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ef08c5e-2262-449a-8b3b-5a816f8a0487
	I1127 23:55:34.698804 1525568 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:34 GMT
	I1127 23:55:34.698811 1525568 round_trippers.go:580]     Audit-Id: 0e32faa6-db1e-4afe-b54d-52394fd23f66
	I1127 23:55:34.698817 1525568 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:34.699191 1525568 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-784312-m02","uid":"51da0d2c-5867-4d61-a2e2-21c69d88c1c1","resourceVersion":"477","creationTimestamp":"2023-11-27T23:55:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-784312-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5292 chars]
	I1127 23:55:35.196213 1525568 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-784312-m02
	I1127 23:55:35.196237 1525568 round_trippers.go:469] Request Headers:
	I1127 23:55:35.196247 1525568 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:35.196254 1525568 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1127 23:55:35.198656 1525568 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:55:35.198681 1525568 round_trippers.go:577] Response Headers:
	I1127 23:55:35.198690 1525568 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:35 GMT
	I1127 23:55:35.198697 1525568 round_trippers.go:580]     Audit-Id: 60625e51-f7aa-480a-b00b-d52f0c0aca15
	I1127 23:55:35.198703 1525568 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:35.198709 1525568 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:35.198715 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 319b43a7-5ec8-4ef8-8721-c1f74315c4f2
	I1127 23:55:35.198726 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ef08c5e-2262-449a-8b3b-5a816f8a0487
	I1127 23:55:35.198964 1525568 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-784312-m02","uid":"51da0d2c-5867-4d61-a2e2-21c69d88c1c1","resourceVersion":"477","creationTimestamp":"2023-11-27T23:55:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-784312-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5292 chars]
	I1127 23:55:35.696015 1525568 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-784312-m02
	I1127 23:55:35.696040 1525568 round_trippers.go:469] Request Headers:
	I1127 23:55:35.696050 1525568 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:35.696057 1525568 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1127 23:55:35.698622 1525568 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:55:35.698643 1525568 round_trippers.go:577] Response Headers:
	I1127 23:55:35.698651 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 319b43a7-5ec8-4ef8-8721-c1f74315c4f2
	I1127 23:55:35.698658 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ef08c5e-2262-449a-8b3b-5a816f8a0487
	I1127 23:55:35.698664 1525568 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:35 GMT
	I1127 23:55:35.698670 1525568 round_trippers.go:580]     Audit-Id: 9fadf38f-e976-4d60-abd4-fc483f992d34
	I1127 23:55:35.698678 1525568 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:35.698684 1525568 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:35.698787 1525568 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-784312-m02","uid":"51da0d2c-5867-4d61-a2e2-21c69d88c1c1","resourceVersion":"477","creationTimestamp":"2023-11-27T23:55:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-784312-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5292 chars]
	I1127 23:55:36.196475 1525568 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-784312-m02
	I1127 23:55:36.196502 1525568 round_trippers.go:469] Request Headers:
	I1127 23:55:36.196512 1525568 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:36.196523 1525568 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1127 23:55:36.199023 1525568 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:55:36.199046 1525568 round_trippers.go:577] Response Headers:
	I1127 23:55:36.199054 1525568 round_trippers.go:580]     Audit-Id: 0cba77ef-f6c3-44e4-bb52-fc90a7886683
	I1127 23:55:36.199061 1525568 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:36.199067 1525568 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:36.199074 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 319b43a7-5ec8-4ef8-8721-c1f74315c4f2
	I1127 23:55:36.199080 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ef08c5e-2262-449a-8b3b-5a816f8a0487
	I1127 23:55:36.199087 1525568 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:36 GMT
	I1127 23:55:36.199193 1525568 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-784312-m02","uid":"51da0d2c-5867-4d61-a2e2-21c69d88c1c1","resourceVersion":"477","creationTimestamp":"2023-11-27T23:55:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-784312-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5292 chars]
	I1127 23:55:36.696835 1525568 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-784312-m02
	I1127 23:55:36.698910 1525568 round_trippers.go:469] Request Headers:
	I1127 23:55:36.698926 1525568 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:36.698935 1525568 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1127 23:55:36.701389 1525568 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:55:36.701433 1525568 round_trippers.go:577] Response Headers:
	I1127 23:55:36.701442 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ef08c5e-2262-449a-8b3b-5a816f8a0487
	I1127 23:55:36.701453 1525568 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:36 GMT
	I1127 23:55:36.701460 1525568 round_trippers.go:580]     Audit-Id: 6635752e-70c1-47b9-ae77-0fd296a07385
	I1127 23:55:36.701515 1525568 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:36.701529 1525568 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:36.701536 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 319b43a7-5ec8-4ef8-8721-c1f74315c4f2
	I1127 23:55:36.701657 1525568 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-784312-m02","uid":"51da0d2c-5867-4d61-a2e2-21c69d88c1c1","resourceVersion":"477","creationTimestamp":"2023-11-27T23:55:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-784312-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5292 chars]
	I1127 23:55:36.702067 1525568 node_ready.go:58] node "multinode-784312-m02" has status "Ready":"False"
	I1127 23:55:37.196536 1525568 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-784312-m02
	I1127 23:55:37.196565 1525568 round_trippers.go:469] Request Headers:
	I1127 23:55:37.196575 1525568 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:37.196583 1525568 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1127 23:55:37.199175 1525568 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:55:37.199201 1525568 round_trippers.go:577] Response Headers:
	I1127 23:55:37.199210 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 319b43a7-5ec8-4ef8-8721-c1f74315c4f2
	I1127 23:55:37.199217 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ef08c5e-2262-449a-8b3b-5a816f8a0487
	I1127 23:55:37.199224 1525568 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:37 GMT
	I1127 23:55:37.199230 1525568 round_trippers.go:580]     Audit-Id: 6d8bfea7-80ee-4a3a-8909-fe0e72cb6f83
	I1127 23:55:37.199237 1525568 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:37.199248 1525568 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:37.199400 1525568 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-784312-m02","uid":"51da0d2c-5867-4d61-a2e2-21c69d88c1c1","resourceVersion":"484","creationTimestamp":"2023-11-27T23:55:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-784312-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
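	(At 23:55:37 the node object's resourceVersion advances from 477 to 484 and the truncated body grows from 5292 to 5561 chars: the kubelet has updated the node, but its Ready condition is still False, so the poll continues. The check itself only inspects .status.conditions, which the log truncates away; the sketch below decodes just that field from a raw Node body. The inline JSON is a hypothetical stand-in for the truncated responses above.)

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// nodeStatus models only the fields the readiness check cares about.
	type nodeStatus struct {
		Status struct {
			Conditions []struct {
				Type   string `json:"type"`
				Status string `json:"status"`
			} `json:"conditions"`
		} `json:"status"`
	}

	// readyStatus returns the value of the node's Ready condition
	// ("True" or "False"), or "Unknown" if the condition is absent.
	func readyStatus(body []byte) (string, error) {
		var n nodeStatus
		if err := json.Unmarshal(body, &n); err != nil {
			return "", err
		}
		for _, c := range n.Status.Conditions {
			if c.Type == "Ready" {
				return c.Status, nil
			}
		}
		return "Unknown", nil
	}

	func main() {
		// Hypothetical minimal body standing in for the truncated ones above.
		body := []byte(`{"status":{"conditions":[{"type":"Ready","status":"False"}]}}`)
		s, err := readyStatus(body)
		if err != nil {
			panic(err)
		}
		fmt.Println(s) // prints: False
	}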
	I1127 23:55:37.696530 1525568 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-784312-m02
	I1127 23:55:37.696552 1525568 round_trippers.go:469] Request Headers:
	I1127 23:55:37.696562 1525568 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:37.696569 1525568 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1127 23:55:37.699026 1525568 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:55:37.699052 1525568 round_trippers.go:577] Response Headers:
	I1127 23:55:37.699061 1525568 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:37.699069 1525568 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:37.699076 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 319b43a7-5ec8-4ef8-8721-c1f74315c4f2
	I1127 23:55:37.699082 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ef08c5e-2262-449a-8b3b-5a816f8a0487
	I1127 23:55:37.699096 1525568 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:37 GMT
	I1127 23:55:37.699102 1525568 round_trippers.go:580]     Audit-Id: a82c2a99-b31c-4c67-9991-88a4c80443ed
	I1127 23:55:37.699318 1525568 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-784312-m02","uid":"51da0d2c-5867-4d61-a2e2-21c69d88c1c1","resourceVersion":"484","creationTimestamp":"2023-11-27T23:55:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-784312-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1127 23:55:38.196134 1525568 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-784312-m02
	I1127 23:55:38.196157 1525568 round_trippers.go:469] Request Headers:
	I1127 23:55:38.196167 1525568 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:38.196174 1525568 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1127 23:55:38.198650 1525568 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:55:38.198672 1525568 round_trippers.go:577] Response Headers:
	I1127 23:55:38.198681 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 319b43a7-5ec8-4ef8-8721-c1f74315c4f2
	I1127 23:55:38.198688 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ef08c5e-2262-449a-8b3b-5a816f8a0487
	I1127 23:55:38.198695 1525568 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:38 GMT
	I1127 23:55:38.198701 1525568 round_trippers.go:580]     Audit-Id: 2f00f909-d002-44c2-8f84-aefd9948e85b
	I1127 23:55:38.198709 1525568 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:38.198716 1525568 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:38.199101 1525568 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-784312-m02","uid":"51da0d2c-5867-4d61-a2e2-21c69d88c1c1","resourceVersion":"484","creationTimestamp":"2023-11-27T23:55:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-784312-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1127 23:55:38.696762 1525568 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-784312-m02
	I1127 23:55:38.696786 1525568 round_trippers.go:469] Request Headers:
	I1127 23:55:38.696797 1525568 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:38.696804 1525568 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1127 23:55:38.699108 1525568 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:55:38.699126 1525568 round_trippers.go:577] Response Headers:
	I1127 23:55:38.699134 1525568 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:38 GMT
	I1127 23:55:38.699141 1525568 round_trippers.go:580]     Audit-Id: dbcc1fd4-5f5e-42ac-8f9a-1175934c1925
	I1127 23:55:38.699147 1525568 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:38.699153 1525568 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:38.699159 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 319b43a7-5ec8-4ef8-8721-c1f74315c4f2
	I1127 23:55:38.699165 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ef08c5e-2262-449a-8b3b-5a816f8a0487
	I1127 23:55:38.699292 1525568 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-784312-m02","uid":"51da0d2c-5867-4d61-a2e2-21c69d88c1c1","resourceVersion":"484","creationTimestamp":"2023-11-27T23:55:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-784312-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1127 23:55:39.197012 1525568 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-784312-m02
	I1127 23:55:39.197041 1525568 round_trippers.go:469] Request Headers:
	I1127 23:55:39.197056 1525568 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:39.197066 1525568 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1127 23:55:39.199493 1525568 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:55:39.199522 1525568 round_trippers.go:577] Response Headers:
	I1127 23:55:39.199532 1525568 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:39 GMT
	I1127 23:55:39.199538 1525568 round_trippers.go:580]     Audit-Id: eede2eea-a780-43c2-8e0b-2e5a9544b569
	I1127 23:55:39.199545 1525568 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:39.199551 1525568 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:39.199558 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 319b43a7-5ec8-4ef8-8721-c1f74315c4f2
	I1127 23:55:39.199569 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ef08c5e-2262-449a-8b3b-5a816f8a0487
	I1127 23:55:39.199685 1525568 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-784312-m02","uid":"51da0d2c-5867-4d61-a2e2-21c69d88c1c1","resourceVersion":"484","creationTimestamp":"2023-11-27T23:55:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-784312-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1127 23:55:39.200050 1525568 node_ready.go:58] node "multinode-784312-m02" has status "Ready":"False"
	I1127 23:55:39.696797 1525568 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-784312-m02
	I1127 23:55:39.696876 1525568 round_trippers.go:469] Request Headers:
	I1127 23:55:39.696886 1525568 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:39.696894 1525568 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1127 23:55:39.699344 1525568 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:55:39.699368 1525568 round_trippers.go:577] Response Headers:
	I1127 23:55:39.699376 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 319b43a7-5ec8-4ef8-8721-c1f74315c4f2
	I1127 23:55:39.699383 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ef08c5e-2262-449a-8b3b-5a816f8a0487
	I1127 23:55:39.699389 1525568 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:39 GMT
	I1127 23:55:39.699395 1525568 round_trippers.go:580]     Audit-Id: f94adc48-e8fc-4e92-be3e-9ca59891a643
	I1127 23:55:39.699401 1525568 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:39.699408 1525568 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:39.699512 1525568 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-784312-m02","uid":"51da0d2c-5867-4d61-a2e2-21c69d88c1c1","resourceVersion":"484","creationTimestamp":"2023-11-27T23:55:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-784312-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1127 23:55:40.196727 1525568 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-784312-m02
	I1127 23:55:40.196753 1525568 round_trippers.go:469] Request Headers:
	I1127 23:55:40.196763 1525568 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1127 23:55:40.196772 1525568 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:40.199300 1525568 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:55:40.199323 1525568 round_trippers.go:577] Response Headers:
	I1127 23:55:40.199331 1525568 round_trippers.go:580]     Audit-Id: 89a2413e-6f20-45a0-893d-bb6674d0dd29
	I1127 23:55:40.199338 1525568 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:40.199344 1525568 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:40.199351 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 319b43a7-5ec8-4ef8-8721-c1f74315c4f2
	I1127 23:55:40.199357 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ef08c5e-2262-449a-8b3b-5a816f8a0487
	I1127 23:55:40.199364 1525568 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:40 GMT
	I1127 23:55:40.199484 1525568 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-784312-m02","uid":"51da0d2c-5867-4d61-a2e2-21c69d88c1c1","resourceVersion":"484","creationTimestamp":"2023-11-27T23:55:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-784312-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1127 23:55:40.696593 1525568 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-784312-m02
	I1127 23:55:40.696619 1525568 round_trippers.go:469] Request Headers:
	I1127 23:55:40.696629 1525568 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:40.696636 1525568 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1127 23:55:40.699354 1525568 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:55:40.699379 1525568 round_trippers.go:577] Response Headers:
	I1127 23:55:40.699388 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ef08c5e-2262-449a-8b3b-5a816f8a0487
	I1127 23:55:40.699395 1525568 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:40 GMT
	I1127 23:55:40.699401 1525568 round_trippers.go:580]     Audit-Id: fa5f990b-13ee-4d85-ad09-72ef4923690c
	I1127 23:55:40.699407 1525568 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:40.699413 1525568 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:40.699419 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 319b43a7-5ec8-4ef8-8721-c1f74315c4f2
	I1127 23:55:40.699779 1525568 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-784312-m02","uid":"51da0d2c-5867-4d61-a2e2-21c69d88c1c1","resourceVersion":"484","creationTimestamp":"2023-11-27T23:55:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-784312-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1127 23:55:41.196908 1525568 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-784312-m02
	I1127 23:55:41.196931 1525568 round_trippers.go:469] Request Headers:
	I1127 23:55:41.196946 1525568 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:41.196954 1525568 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1127 23:55:41.199345 1525568 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:55:41.199368 1525568 round_trippers.go:577] Response Headers:
	I1127 23:55:41.199377 1525568 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:41.199384 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 319b43a7-5ec8-4ef8-8721-c1f74315c4f2
	I1127 23:55:41.199391 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ef08c5e-2262-449a-8b3b-5a816f8a0487
	I1127 23:55:41.199398 1525568 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:41 GMT
	I1127 23:55:41.199404 1525568 round_trippers.go:580]     Audit-Id: 62cd5091-948b-48f9-bbbe-c158197d8705
	I1127 23:55:41.199410 1525568 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:41.199553 1525568 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-784312-m02","uid":"51da0d2c-5867-4d61-a2e2-21c69d88c1c1","resourceVersion":"484","creationTimestamp":"2023-11-27T23:55:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-784312-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1127 23:55:41.696477 1525568 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-784312-m02
	I1127 23:55:41.698281 1525568 round_trippers.go:469] Request Headers:
	I1127 23:55:41.698297 1525568 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:41.698305 1525568 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1127 23:55:41.700736 1525568 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:55:41.700756 1525568 round_trippers.go:577] Response Headers:
	I1127 23:55:41.700764 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ef08c5e-2262-449a-8b3b-5a816f8a0487
	I1127 23:55:41.700771 1525568 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:41 GMT
	I1127 23:55:41.700778 1525568 round_trippers.go:580]     Audit-Id: a6bdd91e-87e9-44bc-8d1e-bd05c1f4f237
	I1127 23:55:41.700784 1525568 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:41.700790 1525568 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:41.700797 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 319b43a7-5ec8-4ef8-8721-c1f74315c4f2
	I1127 23:55:41.700936 1525568 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-784312-m02","uid":"51da0d2c-5867-4d61-a2e2-21c69d88c1c1","resourceVersion":"484","creationTimestamp":"2023-11-27T23:55:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-784312-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1127 23:55:41.701309 1525568 node_ready.go:58] node "multinode-784312-m02" has status "Ready":"False"
	I1127 23:55:42.196108 1525568 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-784312-m02
	I1127 23:55:42.196140 1525568 round_trippers.go:469] Request Headers:
	I1127 23:55:42.196152 1525568 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:42.196160 1525568 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1127 23:55:42.199136 1525568 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:55:42.199165 1525568 round_trippers.go:577] Response Headers:
	I1127 23:55:42.199175 1525568 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:42.199183 1525568 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:42.199189 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 319b43a7-5ec8-4ef8-8721-c1f74315c4f2
	I1127 23:55:42.199199 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ef08c5e-2262-449a-8b3b-5a816f8a0487
	I1127 23:55:42.199211 1525568 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:42 GMT
	I1127 23:55:42.199218 1525568 round_trippers.go:580]     Audit-Id: f86a65ba-9ed9-4405-bdeb-1105672636bc
	I1127 23:55:42.199367 1525568 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-784312-m02","uid":"51da0d2c-5867-4d61-a2e2-21c69d88c1c1","resourceVersion":"484","creationTimestamp":"2023-11-27T23:55:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-784312-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1127 23:55:42.696835 1525568 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-784312-m02
	I1127 23:55:42.696857 1525568 round_trippers.go:469] Request Headers:
	I1127 23:55:42.696867 1525568 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:42.696886 1525568 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1127 23:55:42.699288 1525568 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:55:42.699311 1525568 round_trippers.go:577] Response Headers:
	I1127 23:55:42.699320 1525568 round_trippers.go:580]     Audit-Id: ce881797-986b-4ddd-a35d-1352075766d1
	I1127 23:55:42.699327 1525568 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:42.699334 1525568 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:42.699340 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 319b43a7-5ec8-4ef8-8721-c1f74315c4f2
	I1127 23:55:42.699347 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ef08c5e-2262-449a-8b3b-5a816f8a0487
	I1127 23:55:42.699353 1525568 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:42 GMT
	I1127 23:55:42.699439 1525568 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-784312-m02","uid":"51da0d2c-5867-4d61-a2e2-21c69d88c1c1","resourceVersion":"484","creationTimestamp":"2023-11-27T23:55:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-784312-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1127 23:55:43.196771 1525568 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-784312-m02
	I1127 23:55:43.196797 1525568 round_trippers.go:469] Request Headers:
	I1127 23:55:43.196807 1525568 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:43.196815 1525568 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1127 23:55:43.199319 1525568 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:55:43.199340 1525568 round_trippers.go:577] Response Headers:
	I1127 23:55:43.199348 1525568 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:43.199355 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 319b43a7-5ec8-4ef8-8721-c1f74315c4f2
	I1127 23:55:43.199362 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ef08c5e-2262-449a-8b3b-5a816f8a0487
	I1127 23:55:43.199369 1525568 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:43 GMT
	I1127 23:55:43.199375 1525568 round_trippers.go:580]     Audit-Id: a72985c2-2b33-464d-93c0-bf08c8cb5680
	I1127 23:55:43.199381 1525568 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:43.199491 1525568 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-784312-m02","uid":"51da0d2c-5867-4d61-a2e2-21c69d88c1c1","resourceVersion":"484","creationTimestamp":"2023-11-27T23:55:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-784312-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1127 23:55:43.696571 1525568 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-784312-m02
	I1127 23:55:43.696596 1525568 round_trippers.go:469] Request Headers:
	I1127 23:55:43.696606 1525568 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:43.696613 1525568 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1127 23:55:43.699077 1525568 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:55:43.699104 1525568 round_trippers.go:577] Response Headers:
	I1127 23:55:43.699113 1525568 round_trippers.go:580]     Audit-Id: 4449616a-25ef-4941-89fa-90e66e792331
	I1127 23:55:43.699120 1525568 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:43.699126 1525568 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:43.699132 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 319b43a7-5ec8-4ef8-8721-c1f74315c4f2
	I1127 23:55:43.699138 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ef08c5e-2262-449a-8b3b-5a816f8a0487
	I1127 23:55:43.699154 1525568 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:43 GMT
	I1127 23:55:43.699366 1525568 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-784312-m02","uid":"51da0d2c-5867-4d61-a2e2-21c69d88c1c1","resourceVersion":"484","creationTimestamp":"2023-11-27T23:55:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-784312-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1127 23:55:44.196491 1525568 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-784312-m02
	I1127 23:55:44.196513 1525568 round_trippers.go:469] Request Headers:
	I1127 23:55:44.196523 1525568 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:44.196530 1525568 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1127 23:55:44.198953 1525568 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:55:44.198979 1525568 round_trippers.go:577] Response Headers:
	I1127 23:55:44.198988 1525568 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:44.198995 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 319b43a7-5ec8-4ef8-8721-c1f74315c4f2
	I1127 23:55:44.199001 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ef08c5e-2262-449a-8b3b-5a816f8a0487
	I1127 23:55:44.199008 1525568 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:44 GMT
	I1127 23:55:44.199014 1525568 round_trippers.go:580]     Audit-Id: f13fd674-f28f-4fc3-97ce-6f023e423088
	I1127 23:55:44.199020 1525568 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:44.199328 1525568 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-784312-m02","uid":"51da0d2c-5867-4d61-a2e2-21c69d88c1c1","resourceVersion":"484","creationTimestamp":"2023-11-27T23:55:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-784312-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1127 23:55:44.199710 1525568 node_ready.go:58] node "multinode-784312-m02" has status "Ready":"False"
	I1127 23:55:44.696043 1525568 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-784312-m02
	I1127 23:55:44.696067 1525568 round_trippers.go:469] Request Headers:
	I1127 23:55:44.696077 1525568 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:44.696084 1525568 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1127 23:55:44.699105 1525568 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1127 23:55:44.699187 1525568 round_trippers.go:577] Response Headers:
	I1127 23:55:44.699218 1525568 round_trippers.go:580]     Audit-Id: e00914c2-eccd-42d3-a905-951da047ea72
	I1127 23:55:44.699253 1525568 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:44.699294 1525568 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:44.699333 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 319b43a7-5ec8-4ef8-8721-c1f74315c4f2
	I1127 23:55:44.699372 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ef08c5e-2262-449a-8b3b-5a816f8a0487
	I1127 23:55:44.699384 1525568 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:44 GMT
	I1127 23:55:44.699519 1525568 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-784312-m02","uid":"51da0d2c-5867-4d61-a2e2-21c69d88c1c1","resourceVersion":"484","creationTimestamp":"2023-11-27T23:55:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-784312-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1127 23:55:45.203234 1525568 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-784312-m02
	I1127 23:55:45.203261 1525568 round_trippers.go:469] Request Headers:
	I1127 23:55:45.203272 1525568 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:45.203279 1525568 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1127 23:55:45.214322 1525568 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I1127 23:55:45.214356 1525568 round_trippers.go:577] Response Headers:
	I1127 23:55:45.214367 1525568 round_trippers.go:580]     Audit-Id: 28f8bc5c-b857-4fc3-9f9e-0e13a8fef857
	I1127 23:55:45.214374 1525568 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:45.214381 1525568 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:45.214387 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 319b43a7-5ec8-4ef8-8721-c1f74315c4f2
	I1127 23:55:45.214394 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ef08c5e-2262-449a-8b3b-5a816f8a0487
	I1127 23:55:45.214403 1525568 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:45 GMT
	I1127 23:55:45.214616 1525568 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-784312-m02","uid":"51da0d2c-5867-4d61-a2e2-21c69d88c1c1","resourceVersion":"484","creationTimestamp":"2023-11-27T23:55:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-784312-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1127 23:55:45.696042 1525568 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-784312-m02
	I1127 23:55:45.696067 1525568 round_trippers.go:469] Request Headers:
	I1127 23:55:45.696077 1525568 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:45.696084 1525568 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1127 23:55:45.698674 1525568 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:55:45.698703 1525568 round_trippers.go:577] Response Headers:
	I1127 23:55:45.698712 1525568 round_trippers.go:580]     Audit-Id: 52f1d615-5083-4a81-b708-04267896509b
	I1127 23:55:45.698731 1525568 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:45.698738 1525568 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:45.698744 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 319b43a7-5ec8-4ef8-8721-c1f74315c4f2
	I1127 23:55:45.698751 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ef08c5e-2262-449a-8b3b-5a816f8a0487
	I1127 23:55:45.698763 1525568 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:45 GMT
	I1127 23:55:45.698879 1525568 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-784312-m02","uid":"51da0d2c-5867-4d61-a2e2-21c69d88c1c1","resourceVersion":"484","creationTimestamp":"2023-11-27T23:55:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-784312-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1127 23:55:46.196533 1525568 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-784312-m02
	I1127 23:55:46.196560 1525568 round_trippers.go:469] Request Headers:
	I1127 23:55:46.196571 1525568 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:46.196579 1525568 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1127 23:55:46.199187 1525568 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:55:46.199219 1525568 round_trippers.go:577] Response Headers:
	I1127 23:55:46.199228 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 319b43a7-5ec8-4ef8-8721-c1f74315c4f2
	I1127 23:55:46.199235 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ef08c5e-2262-449a-8b3b-5a816f8a0487
	I1127 23:55:46.199242 1525568 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:46 GMT
	I1127 23:55:46.199248 1525568 round_trippers.go:580]     Audit-Id: 2fa26181-cb73-4cde-9025-30ed61cf10e4
	I1127 23:55:46.199254 1525568 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:46.199260 1525568 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:46.199567 1525568 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-784312-m02","uid":"51da0d2c-5867-4d61-a2e2-21c69d88c1c1","resourceVersion":"484","creationTimestamp":"2023-11-27T23:55:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-784312-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1127 23:55:46.199945 1525568 node_ready.go:58] node "multinode-784312-m02" has status "Ready":"False"
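	[Editor's note] The repeated block above is minikube's node-readiness wait loop: roughly every 500ms it issues GET /api/v1/nodes/multinode-784312-m02, logs the returned Node object, and node_ready.go periodically reports "Ready":"False" until the kubelet publishes a Ready condition. As a rough illustration of that pattern (a minimal sketch, not minikube's actual node_ready.go implementation), the following Go snippet polls a node using the standard client-go API; the helper name waitNodeReady and the 500ms interval are assumptions chosen to mirror the observed request cadence.

	// Hypothetical sketch of the readiness poll visible in the log above:
	// fetch the Node object on an interval and inspect its NodeReady condition.
	// Not minikube's code; built only on standard client-go calls.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// waitNodeReady polls the API server until the named node reports
	// Ready=True or the context is cancelled (wait budget exhausted).
	func waitNodeReady(ctx context.Context, c kubernetes.Interface, name string) error {
		ticker := time.NewTicker(500 * time.Millisecond) // assumed interval, matching the ~500ms cadence in the log
		defer ticker.Stop()
		for {
			node, err := c.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err == nil {
				for _, cond := range node.Status.Conditions {
					if cond.Type == corev1.NodeReady {
						fmt.Printf("node %q has status \"Ready\":%q\n", name, cond.Status)
						if cond.Status == corev1.ConditionTrue {
							return nil
						}
					}
				}
			}
			select {
			case <-ctx.Done():
				return ctx.Err() // deadline hit before the node became Ready
			case <-ticker.C:
			}
		}
	}

	Each loop iteration corresponds to one GET / Response Headers / Response Body group in the log; the wait ends either when the Ready condition flips to True or when the caller's deadline expires.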
	I1127 23:55:46.696193 1525568 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-784312-m02
	I1127 23:55:46.698372 1525568 round_trippers.go:469] Request Headers:
	I1127 23:55:46.698386 1525568 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:46.698395 1525568 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1127 23:55:46.700810 1525568 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:55:46.700829 1525568 round_trippers.go:577] Response Headers:
	I1127 23:55:46.700838 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ef08c5e-2262-449a-8b3b-5a816f8a0487
	I1127 23:55:46.700844 1525568 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:46 GMT
	I1127 23:55:46.700850 1525568 round_trippers.go:580]     Audit-Id: e73c46cd-88dc-4f98-9605-e955cce340ae
	I1127 23:55:46.700857 1525568 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:46.700864 1525568 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:46.700870 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 319b43a7-5ec8-4ef8-8721-c1f74315c4f2
	I1127 23:55:46.701311 1525568 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-784312-m02","uid":"51da0d2c-5867-4d61-a2e2-21c69d88c1c1","resourceVersion":"484","creationTimestamp":"2023-11-27T23:55:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-784312-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1127 23:55:47.196508 1525568 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-784312-m02
	I1127 23:55:47.196530 1525568 round_trippers.go:469] Request Headers:
	I1127 23:55:47.196541 1525568 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:47.196548 1525568 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1127 23:55:47.201304 1525568 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1127 23:55:47.201325 1525568 round_trippers.go:577] Response Headers:
	I1127 23:55:47.201334 1525568 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:47.201340 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 319b43a7-5ec8-4ef8-8721-c1f74315c4f2
	I1127 23:55:47.201347 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ef08c5e-2262-449a-8b3b-5a816f8a0487
	I1127 23:55:47.201353 1525568 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:47 GMT
	I1127 23:55:47.201360 1525568 round_trippers.go:580]     Audit-Id: 95f19891-94c6-4a81-9431-51310150670d
	I1127 23:55:47.201366 1525568 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:47.201637 1525568 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-784312-m02","uid":"51da0d2c-5867-4d61-a2e2-21c69d88c1c1","resourceVersion":"484","creationTimestamp":"2023-11-27T23:55:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-784312-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1127 23:55:47.696756 1525568 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-784312-m02
	I1127 23:55:47.696780 1525568 round_trippers.go:469] Request Headers:
	I1127 23:55:47.696790 1525568 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1127 23:55:47.696797 1525568 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:47.699279 1525568 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:55:47.699301 1525568 round_trippers.go:577] Response Headers:
	I1127 23:55:47.699309 1525568 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:47.699315 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 319b43a7-5ec8-4ef8-8721-c1f74315c4f2
	I1127 23:55:47.699330 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ef08c5e-2262-449a-8b3b-5a816f8a0487
	I1127 23:55:47.699339 1525568 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:47 GMT
	I1127 23:55:47.699345 1525568 round_trippers.go:580]     Audit-Id: 9ff23583-062c-4c76-a297-22e3ba98483e
	I1127 23:55:47.699351 1525568 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:47.699437 1525568 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-784312-m02","uid":"51da0d2c-5867-4d61-a2e2-21c69d88c1c1","resourceVersion":"484","creationTimestamp":"2023-11-27T23:55:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-784312-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1127 23:55:48.195985 1525568 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-784312-m02
	I1127 23:55:48.196012 1525568 round_trippers.go:469] Request Headers:
	I1127 23:55:48.196022 1525568 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:48.196030 1525568 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1127 23:55:48.198578 1525568 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:55:48.198601 1525568 round_trippers.go:577] Response Headers:
	I1127 23:55:48.198609 1525568 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:48.198616 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 319b43a7-5ec8-4ef8-8721-c1f74315c4f2
	I1127 23:55:48.198622 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ef08c5e-2262-449a-8b3b-5a816f8a0487
	I1127 23:55:48.198628 1525568 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:48 GMT
	I1127 23:55:48.198634 1525568 round_trippers.go:580]     Audit-Id: d1adb608-7d2e-49d0-9f85-77bf361011c7
	I1127 23:55:48.198640 1525568 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:48.198735 1525568 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-784312-m02","uid":"51da0d2c-5867-4d61-a2e2-21c69d88c1c1","resourceVersion":"484","creationTimestamp":"2023-11-27T23:55:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-784312-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1127 23:55:48.695986 1525568 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-784312-m02
	I1127 23:55:48.696011 1525568 round_trippers.go:469] Request Headers:
	I1127 23:55:48.696021 1525568 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:48.696029 1525568 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1127 23:55:48.698475 1525568 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:55:48.698496 1525568 round_trippers.go:577] Response Headers:
	I1127 23:55:48.698504 1525568 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:48 GMT
	I1127 23:55:48.698511 1525568 round_trippers.go:580]     Audit-Id: b074b75d-f90f-402f-8a6a-76e369c01fe6
	I1127 23:55:48.698517 1525568 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:48.698524 1525568 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:48.698530 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 319b43a7-5ec8-4ef8-8721-c1f74315c4f2
	I1127 23:55:48.698537 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ef08c5e-2262-449a-8b3b-5a816f8a0487
	I1127 23:55:48.698625 1525568 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-784312-m02","uid":"51da0d2c-5867-4d61-a2e2-21c69d88c1c1","resourceVersion":"484","creationTimestamp":"2023-11-27T23:55:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-784312-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1127 23:55:48.698996 1525568 node_ready.go:58] node "multinode-784312-m02" has status "Ready":"False"
	I1127 23:55:49.196829 1525568 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-784312-m02
	I1127 23:55:49.196852 1525568 round_trippers.go:469] Request Headers:
	I1127 23:55:49.196862 1525568 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:49.196869 1525568 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1127 23:55:49.199474 1525568 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:55:49.199495 1525568 round_trippers.go:577] Response Headers:
	I1127 23:55:49.199504 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 319b43a7-5ec8-4ef8-8721-c1f74315c4f2
	I1127 23:55:49.199511 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ef08c5e-2262-449a-8b3b-5a816f8a0487
	I1127 23:55:49.199518 1525568 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:49 GMT
	I1127 23:55:49.199524 1525568 round_trippers.go:580]     Audit-Id: 6b0cad47-833a-43cc-9aef-a0c1d2c8f720
	I1127 23:55:49.199530 1525568 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:49.199536 1525568 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:49.199671 1525568 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-784312-m02","uid":"51da0d2c-5867-4d61-a2e2-21c69d88c1c1","resourceVersion":"484","creationTimestamp":"2023-11-27T23:55:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-784312-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1127 23:55:49.696586 1525568 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-784312-m02
	I1127 23:55:49.696608 1525568 round_trippers.go:469] Request Headers:
	I1127 23:55:49.696618 1525568 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:49.696625 1525568 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1127 23:55:49.699133 1525568 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:55:49.699159 1525568 round_trippers.go:577] Response Headers:
	I1127 23:55:49.699168 1525568 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:49 GMT
	I1127 23:55:49.699177 1525568 round_trippers.go:580]     Audit-Id: 57541e05-393e-4eea-afb9-58102f39318f
	I1127 23:55:49.699184 1525568 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:49.699190 1525568 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:49.699196 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 319b43a7-5ec8-4ef8-8721-c1f74315c4f2
	I1127 23:55:49.699202 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ef08c5e-2262-449a-8b3b-5a816f8a0487
	I1127 23:55:49.699300 1525568 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-784312-m02","uid":"51da0d2c-5867-4d61-a2e2-21c69d88c1c1","resourceVersion":"484","creationTimestamp":"2023-11-27T23:55:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-784312-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1127 23:55:50.195991 1525568 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-784312-m02
	I1127 23:55:50.196020 1525568 round_trippers.go:469] Request Headers:
	I1127 23:55:50.196032 1525568 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:50.196039 1525568 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1127 23:55:50.198817 1525568 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:55:50.198843 1525568 round_trippers.go:577] Response Headers:
	I1127 23:55:50.198852 1525568 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:50.198859 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 319b43a7-5ec8-4ef8-8721-c1f74315c4f2
	I1127 23:55:50.198865 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ef08c5e-2262-449a-8b3b-5a816f8a0487
	I1127 23:55:50.198872 1525568 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:50 GMT
	I1127 23:55:50.198878 1525568 round_trippers.go:580]     Audit-Id: 760cac8d-4d31-4e44-86fe-6527c761526f
	I1127 23:55:50.198891 1525568 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:50.199012 1525568 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-784312-m02","uid":"51da0d2c-5867-4d61-a2e2-21c69d88c1c1","resourceVersion":"484","creationTimestamp":"2023-11-27T23:55:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-784312-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1127 23:55:50.696792 1525568 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-784312-m02
	I1127 23:55:50.696816 1525568 round_trippers.go:469] Request Headers:
	I1127 23:55:50.696826 1525568 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:50.696834 1525568 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1127 23:55:50.699329 1525568 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:55:50.699355 1525568 round_trippers.go:577] Response Headers:
	I1127 23:55:50.699363 1525568 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:50.699370 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 319b43a7-5ec8-4ef8-8721-c1f74315c4f2
	I1127 23:55:50.699376 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ef08c5e-2262-449a-8b3b-5a816f8a0487
	I1127 23:55:50.699382 1525568 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:50 GMT
	I1127 23:55:50.699388 1525568 round_trippers.go:580]     Audit-Id: 34c6d8bc-8fde-443b-84e4-1f7c546378ce
	I1127 23:55:50.699395 1525568 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:50.699470 1525568 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-784312-m02","uid":"51da0d2c-5867-4d61-a2e2-21c69d88c1c1","resourceVersion":"484","creationTimestamp":"2023-11-27T23:55:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-784312-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1127 23:55:50.699844 1525568 node_ready.go:58] node "multinode-784312-m02" has status "Ready":"False"
	I1127 23:55:51.196627 1525568 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-784312-m02
	I1127 23:55:51.196655 1525568 round_trippers.go:469] Request Headers:
	I1127 23:55:51.196665 1525568 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1127 23:55:51.196674 1525568 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:51.201667 1525568 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1127 23:55:51.201689 1525568 round_trippers.go:577] Response Headers:
	I1127 23:55:51.201698 1525568 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:51.201704 1525568 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:51.201711 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 319b43a7-5ec8-4ef8-8721-c1f74315c4f2
	I1127 23:55:51.201717 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ef08c5e-2262-449a-8b3b-5a816f8a0487
	I1127 23:55:51.201724 1525568 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:51 GMT
	I1127 23:55:51.201730 1525568 round_trippers.go:580]     Audit-Id: dabd074b-6bf6-46f4-b4fe-635e28e39ddc
	I1127 23:55:51.201834 1525568 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-784312-m02","uid":"51da0d2c-5867-4d61-a2e2-21c69d88c1c1","resourceVersion":"484","creationTimestamp":"2023-11-27T23:55:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-784312-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1127 23:55:51.696705 1525568 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-784312-m02
	I1127 23:55:51.698627 1525568 round_trippers.go:469] Request Headers:
	I1127 23:55:51.698643 1525568 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:51.698652 1525568 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1127 23:55:51.701108 1525568 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:55:51.701136 1525568 round_trippers.go:577] Response Headers:
	I1127 23:55:51.701145 1525568 round_trippers.go:580]     Audit-Id: 0b9be5c7-1741-48f8-8b71-bec7011b51ac
	I1127 23:55:51.701152 1525568 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:51.701158 1525568 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:51.701164 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 319b43a7-5ec8-4ef8-8721-c1f74315c4f2
	I1127 23:55:51.701171 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ef08c5e-2262-449a-8b3b-5a816f8a0487
	I1127 23:55:51.701180 1525568 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:51 GMT
	I1127 23:55:51.701290 1525568 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-784312-m02","uid":"51da0d2c-5867-4d61-a2e2-21c69d88c1c1","resourceVersion":"484","creationTimestamp":"2023-11-27T23:55:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-784312-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1127 23:55:52.196149 1525568 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-784312-m02
	I1127 23:55:52.196175 1525568 round_trippers.go:469] Request Headers:
	I1127 23:55:52.196185 1525568 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:52.196192 1525568 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1127 23:55:52.198650 1525568 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:55:52.198670 1525568 round_trippers.go:577] Response Headers:
	I1127 23:55:52.198679 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 319b43a7-5ec8-4ef8-8721-c1f74315c4f2
	I1127 23:55:52.198685 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ef08c5e-2262-449a-8b3b-5a816f8a0487
	I1127 23:55:52.198693 1525568 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:52 GMT
	I1127 23:55:52.198699 1525568 round_trippers.go:580]     Audit-Id: ae51d754-dcc7-4804-98b5-96fd0e4cfc92
	I1127 23:55:52.198705 1525568 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:52.198711 1525568 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:52.198856 1525568 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-784312-m02","uid":"51da0d2c-5867-4d61-a2e2-21c69d88c1c1","resourceVersion":"484","creationTimestamp":"2023-11-27T23:55:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-784312-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1127 23:55:52.695962 1525568 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-784312-m02
	I1127 23:55:52.695989 1525568 round_trippers.go:469] Request Headers:
	I1127 23:55:52.695998 1525568 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:52.696006 1525568 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1127 23:55:52.698543 1525568 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:55:52.698567 1525568 round_trippers.go:577] Response Headers:
	I1127 23:55:52.698575 1525568 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:52 GMT
	I1127 23:55:52.698583 1525568 round_trippers.go:580]     Audit-Id: b7b23655-c597-4717-b651-35a811c11eef
	I1127 23:55:52.698589 1525568 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:52.698595 1525568 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:52.698601 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 319b43a7-5ec8-4ef8-8721-c1f74315c4f2
	I1127 23:55:52.698607 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ef08c5e-2262-449a-8b3b-5a816f8a0487
	I1127 23:55:52.699035 1525568 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-784312-m02","uid":"51da0d2c-5867-4d61-a2e2-21c69d88c1c1","resourceVersion":"484","creationTimestamp":"2023-11-27T23:55:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-784312-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1127 23:55:53.196735 1525568 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-784312-m02
	I1127 23:55:53.196756 1525568 round_trippers.go:469] Request Headers:
	I1127 23:55:53.196765 1525568 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:53.196773 1525568 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1127 23:55:53.199196 1525568 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:55:53.199224 1525568 round_trippers.go:577] Response Headers:
	I1127 23:55:53.199233 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 319b43a7-5ec8-4ef8-8721-c1f74315c4f2
	I1127 23:55:53.199241 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ef08c5e-2262-449a-8b3b-5a816f8a0487
	I1127 23:55:53.199248 1525568 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:53 GMT
	I1127 23:55:53.199254 1525568 round_trippers.go:580]     Audit-Id: 5d81d48f-40d7-4633-8a46-cdd37fa77232
	I1127 23:55:53.199260 1525568 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:53.199267 1525568 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:53.199393 1525568 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-784312-m02","uid":"51da0d2c-5867-4d61-a2e2-21c69d88c1c1","resourceVersion":"484","creationTimestamp":"2023-11-27T23:55:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-784312-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1127 23:55:53.199762 1525568 node_ready.go:58] node "multinode-784312-m02" has status "Ready":"False"
	I1127 23:55:53.696257 1525568 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-784312-m02
	I1127 23:55:53.696278 1525568 round_trippers.go:469] Request Headers:
	I1127 23:55:53.696289 1525568 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:53.696296 1525568 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1127 23:55:53.698717 1525568 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:55:53.698743 1525568 round_trippers.go:577] Response Headers:
	I1127 23:55:53.698752 1525568 round_trippers.go:580]     Audit-Id: f9f76607-018d-4324-867e-e4b7bf37ecb8
	I1127 23:55:53.698759 1525568 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:53.698765 1525568 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:53.698772 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 319b43a7-5ec8-4ef8-8721-c1f74315c4f2
	I1127 23:55:53.698778 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ef08c5e-2262-449a-8b3b-5a816f8a0487
	I1127 23:55:53.698785 1525568 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:53 GMT
	I1127 23:55:53.698902 1525568 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-784312-m02","uid":"51da0d2c-5867-4d61-a2e2-21c69d88c1c1","resourceVersion":"484","creationTimestamp":"2023-11-27T23:55:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-784312-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1127 23:55:54.195927 1525568 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-784312-m02
	I1127 23:55:54.195951 1525568 round_trippers.go:469] Request Headers:
	I1127 23:55:54.195962 1525568 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:54.195969 1525568 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1127 23:55:54.198456 1525568 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:55:54.198479 1525568 round_trippers.go:577] Response Headers:
	I1127 23:55:54.198487 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ef08c5e-2262-449a-8b3b-5a816f8a0487
	I1127 23:55:54.198494 1525568 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:54 GMT
	I1127 23:55:54.198500 1525568 round_trippers.go:580]     Audit-Id: fe4d13d7-a8e3-4f2d-a508-3aad382c42ce
	I1127 23:55:54.198507 1525568 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:54.198513 1525568 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:54.198520 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 319b43a7-5ec8-4ef8-8721-c1f74315c4f2
	I1127 23:55:54.199654 1525568 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-784312-m02","uid":"51da0d2c-5867-4d61-a2e2-21c69d88c1c1","resourceVersion":"484","creationTimestamp":"2023-11-27T23:55:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-784312-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1127 23:55:54.696627 1525568 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-784312-m02
	I1127 23:55:54.696650 1525568 round_trippers.go:469] Request Headers:
	I1127 23:55:54.696660 1525568 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:54.696667 1525568 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1127 23:55:54.699071 1525568 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:55:54.699091 1525568 round_trippers.go:577] Response Headers:
	I1127 23:55:54.699100 1525568 round_trippers.go:580]     Audit-Id: b86066fa-d57b-4941-bd5c-28e270449048
	I1127 23:55:54.699106 1525568 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:54.699113 1525568 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:54.699119 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 319b43a7-5ec8-4ef8-8721-c1f74315c4f2
	I1127 23:55:54.699125 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ef08c5e-2262-449a-8b3b-5a816f8a0487
	I1127 23:55:54.699131 1525568 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:54 GMT
	I1127 23:55:54.699225 1525568 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-784312-m02","uid":"51da0d2c-5867-4d61-a2e2-21c69d88c1c1","resourceVersion":"484","creationTimestamp":"2023-11-27T23:55:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-784312-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1127 23:55:55.196183 1525568 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-784312-m02
	I1127 23:55:55.196210 1525568 round_trippers.go:469] Request Headers:
	I1127 23:55:55.196220 1525568 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:55.196228 1525568 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1127 23:55:55.198711 1525568 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:55:55.198737 1525568 round_trippers.go:577] Response Headers:
	I1127 23:55:55.198745 1525568 round_trippers.go:580]     Audit-Id: d946c542-ae9b-4d23-9464-34ff99286e9a
	I1127 23:55:55.198753 1525568 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:55.198759 1525568 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:55.198765 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 319b43a7-5ec8-4ef8-8721-c1f74315c4f2
	I1127 23:55:55.198772 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ef08c5e-2262-449a-8b3b-5a816f8a0487
	I1127 23:55:55.198782 1525568 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:55 GMT
	I1127 23:55:55.198893 1525568 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-784312-m02","uid":"51da0d2c-5867-4d61-a2e2-21c69d88c1c1","resourceVersion":"484","creationTimestamp":"2023-11-27T23:55:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-784312-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1127 23:55:55.695947 1525568 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-784312-m02
	I1127 23:55:55.695973 1525568 round_trippers.go:469] Request Headers:
	I1127 23:55:55.695984 1525568 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:55.695991 1525568 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1127 23:55:55.698487 1525568 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:55:55.698510 1525568 round_trippers.go:577] Response Headers:
	I1127 23:55:55.698519 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 319b43a7-5ec8-4ef8-8721-c1f74315c4f2
	I1127 23:55:55.698526 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ef08c5e-2262-449a-8b3b-5a816f8a0487
	I1127 23:55:55.698532 1525568 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:55 GMT
	I1127 23:55:55.698538 1525568 round_trippers.go:580]     Audit-Id: 0a8b30b8-ef74-4e4c-95c9-d0dfb97fd542
	I1127 23:55:55.698548 1525568 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:55.698558 1525568 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:55.698852 1525568 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-784312-m02","uid":"51da0d2c-5867-4d61-a2e2-21c69d88c1c1","resourceVersion":"484","creationTimestamp":"2023-11-27T23:55:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-784312-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1127 23:55:55.699236 1525568 node_ready.go:58] node "multinode-784312-m02" has status "Ready":"False"
	I1127 23:55:56.195929 1525568 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-784312-m02
	I1127 23:55:56.195955 1525568 round_trippers.go:469] Request Headers:
	I1127 23:55:56.195966 1525568 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:56.195973 1525568 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1127 23:55:56.198429 1525568 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:55:56.198456 1525568 round_trippers.go:577] Response Headers:
	I1127 23:55:56.198465 1525568 round_trippers.go:580]     Audit-Id: 750a16f3-27cc-4da4-80e7-ac63c0cdb834
	I1127 23:55:56.198472 1525568 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:56.198478 1525568 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:56.198484 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 319b43a7-5ec8-4ef8-8721-c1f74315c4f2
	I1127 23:55:56.198490 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ef08c5e-2262-449a-8b3b-5a816f8a0487
	I1127 23:55:56.198496 1525568 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:56 GMT
	I1127 23:55:56.198606 1525568 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-784312-m02","uid":"51da0d2c-5867-4d61-a2e2-21c69d88c1c1","resourceVersion":"484","creationTimestamp":"2023-11-27T23:55:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-784312-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1127 23:55:56.696784 1525568 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-784312-m02
	I1127 23:55:56.699263 1525568 round_trippers.go:469] Request Headers:
	I1127 23:55:56.699289 1525568 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:56.699309 1525568 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1127 23:55:56.702233 1525568 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:55:56.702256 1525568 round_trippers.go:577] Response Headers:
	I1127 23:55:56.702266 1525568 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:56.702272 1525568 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:56.702279 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 319b43a7-5ec8-4ef8-8721-c1f74315c4f2
	I1127 23:55:56.702285 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ef08c5e-2262-449a-8b3b-5a816f8a0487
	I1127 23:55:56.702292 1525568 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:56 GMT
	I1127 23:55:56.702304 1525568 round_trippers.go:580]     Audit-Id: caf96fea-35c0-4921-9e39-30208bc76009
	I1127 23:55:56.702788 1525568 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-784312-m02","uid":"51da0d2c-5867-4d61-a2e2-21c69d88c1c1","resourceVersion":"484","creationTimestamp":"2023-11-27T23:55:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-784312-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1127 23:55:57.196511 1525568 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-784312-m02
	I1127 23:55:57.196538 1525568 round_trippers.go:469] Request Headers:
	I1127 23:55:57.196549 1525568 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:57.196557 1525568 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1127 23:55:57.199051 1525568 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:55:57.199076 1525568 round_trippers.go:577] Response Headers:
	I1127 23:55:57.199084 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 319b43a7-5ec8-4ef8-8721-c1f74315c4f2
	I1127 23:55:57.199091 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ef08c5e-2262-449a-8b3b-5a816f8a0487
	I1127 23:55:57.199100 1525568 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:57 GMT
	I1127 23:55:57.199106 1525568 round_trippers.go:580]     Audit-Id: f4e4d7ff-3f1b-40e7-9421-da40681ce4f3
	I1127 23:55:57.199114 1525568 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:57.199121 1525568 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:57.199299 1525568 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-784312-m02","uid":"51da0d2c-5867-4d61-a2e2-21c69d88c1c1","resourceVersion":"484","creationTimestamp":"2023-11-27T23:55:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-784312-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1127 23:55:57.695973 1525568 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-784312-m02
	I1127 23:55:57.696004 1525568 round_trippers.go:469] Request Headers:
	I1127 23:55:57.696015 1525568 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:57.696022 1525568 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1127 23:55:57.698616 1525568 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:55:57.698644 1525568 round_trippers.go:577] Response Headers:
	I1127 23:55:57.698654 1525568 round_trippers.go:580]     Audit-Id: cf718dfd-297b-4c86-aa3f-b92b23f1b115
	I1127 23:55:57.698661 1525568 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:57.698668 1525568 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:57.698674 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 319b43a7-5ec8-4ef8-8721-c1f74315c4f2
	I1127 23:55:57.698680 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ef08c5e-2262-449a-8b3b-5a816f8a0487
	I1127 23:55:57.698687 1525568 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:57 GMT
	I1127 23:55:57.698948 1525568 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-784312-m02","uid":"51da0d2c-5867-4d61-a2e2-21c69d88c1c1","resourceVersion":"484","creationTimestamp":"2023-11-27T23:55:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-784312-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1127 23:55:57.699333 1525568 node_ready.go:58] node "multinode-784312-m02" has status "Ready":"False"
	I1127 23:55:58.196643 1525568 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-784312-m02
	I1127 23:55:58.196667 1525568 round_trippers.go:469] Request Headers:
	I1127 23:55:58.196677 1525568 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:58.196684 1525568 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1127 23:55:58.199243 1525568 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:55:58.199263 1525568 round_trippers.go:577] Response Headers:
	I1127 23:55:58.199273 1525568 round_trippers.go:580]     Audit-Id: 97df0e70-0146-493e-8eb7-69dde29059c3
	I1127 23:55:58.199279 1525568 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:58.199286 1525568 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:58.199292 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 319b43a7-5ec8-4ef8-8721-c1f74315c4f2
	I1127 23:55:58.199298 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ef08c5e-2262-449a-8b3b-5a816f8a0487
	I1127 23:55:58.199304 1525568 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:58 GMT
	I1127 23:55:58.199434 1525568 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-784312-m02","uid":"51da0d2c-5867-4d61-a2e2-21c69d88c1c1","resourceVersion":"484","creationTimestamp":"2023-11-27T23:55:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-784312-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1127 23:55:58.696535 1525568 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-784312-m02
	I1127 23:55:58.696557 1525568 round_trippers.go:469] Request Headers:
	I1127 23:55:58.696566 1525568 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:58.696573 1525568 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1127 23:55:58.699089 1525568 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:55:58.699118 1525568 round_trippers.go:577] Response Headers:
	I1127 23:55:58.699128 1525568 round_trippers.go:580]     Audit-Id: fd16b9fa-d524-4117-a344-80cacc12e262
	I1127 23:55:58.699135 1525568 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:58.699141 1525568 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:58.699147 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 319b43a7-5ec8-4ef8-8721-c1f74315c4f2
	I1127 23:55:58.699154 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ef08c5e-2262-449a-8b3b-5a816f8a0487
	I1127 23:55:58.699165 1525568 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:58 GMT
	I1127 23:55:58.699305 1525568 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-784312-m02","uid":"51da0d2c-5867-4d61-a2e2-21c69d88c1c1","resourceVersion":"507","creationTimestamp":"2023-11-27T23:55:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-784312-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5378 chars]
	I1127 23:55:58.699683 1525568 node_ready.go:49] node "multinode-784312-m02" has status "Ready":"True"
	I1127 23:55:58.699699 1525568 node_ready.go:38] duration metric: took 31.014291749s waiting for node "multinode-784312-m02" to be "Ready" ...
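The half-second GET cadence above (requests at :57.69, :58.19, :58.69) is the node-readiness poll: minikube's node_ready.go re-fetches the Node object until its NodeReady condition flips to "True". For reference, here is a minimal client-go sketch of the same pattern; the helper names and the kubeconfig loading are illustrative, not minikube's actual code:

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isNodeReady reports whether the node's NodeReady condition is True,
// the same check behind the `has status "Ready":"True"` lines above.
func isNodeReady(node *corev1.Node) bool {
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func waitForNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
	// Poll every 500ms, matching the cadence visible in the log timestamps.
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // treat errors as transient and keep polling
			}
			return isNodeReady(node), nil
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := waitForNodeReady(context.Background(), cs, "multinode-784312-m02", 6*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("node Ready")
}
```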
	I1127 23:55:58.699711 1525568 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1127 23:55:58.699773 1525568 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1127 23:55:58.699785 1525568 round_trippers.go:469] Request Headers:
	I1127 23:55:58.699793 1525568 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:58.699800 1525568 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1127 23:55:58.703226 1525568 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1127 23:55:58.703249 1525568 round_trippers.go:577] Response Headers:
	I1127 23:55:58.703258 1525568 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:58.703265 1525568 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:58.703271 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 319b43a7-5ec8-4ef8-8721-c1f74315c4f2
	I1127 23:55:58.703278 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ef08c5e-2262-449a-8b3b-5a816f8a0487
	I1127 23:55:58.703284 1525568 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:58 GMT
	I1127 23:55:58.703291 1525568 round_trippers.go:580]     Audit-Id: b371d3e9-1366-4b18-93f4-425886d9754d
	I1127 23:55:58.704024 1525568 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"507"},"items":[{"metadata":{"name":"coredns-5dd5756b68-n6fjh","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"bd970bc6-edbd-4f25-830d-54a301351a7e","resourceVersion":"418","creationTimestamp":"2023-11-27T23:54:36Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"97d0e2f6-2e20-4f2b-8aca-acc015d0a1b5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:54:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"97d0e2f6-2e20-4f2b-8aca-acc015d0a1b5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 68972 chars]
	I1127 23:55:58.706979 1525568 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-n6fjh" in "kube-system" namespace to be "Ready" ...
	I1127 23:55:58.707076 1525568 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-n6fjh
	I1127 23:55:58.707087 1525568 round_trippers.go:469] Request Headers:
	I1127 23:55:58.707096 1525568 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:58.707103 1525568 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1127 23:55:58.709507 1525568 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:55:58.709536 1525568 round_trippers.go:577] Response Headers:
	I1127 23:55:58.709545 1525568 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:58.709552 1525568 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:58.709558 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 319b43a7-5ec8-4ef8-8721-c1f74315c4f2
	I1127 23:55:58.709564 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ef08c5e-2262-449a-8b3b-5a816f8a0487
	I1127 23:55:58.709572 1525568 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:58 GMT
	I1127 23:55:58.709585 1525568 round_trippers.go:580]     Audit-Id: 8fad606e-85e7-49a0-9499-8d31757da09a
	I1127 23:55:58.709688 1525568 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-n6fjh","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"bd970bc6-edbd-4f25-830d-54a301351a7e","resourceVersion":"418","creationTimestamp":"2023-11-27T23:54:36Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"97d0e2f6-2e20-4f2b-8aca-acc015d0a1b5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:54:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"97d0e2f6-2e20-4f2b-8aca-acc015d0a1b5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6263 chars]
	I1127 23:55:58.710214 1525568 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-784312
	I1127 23:55:58.710232 1525568 round_trippers.go:469] Request Headers:
	I1127 23:55:58.710249 1525568 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1127 23:55:58.710258 1525568 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:58.712638 1525568 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:55:58.712663 1525568 round_trippers.go:577] Response Headers:
	I1127 23:55:58.712672 1525568 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:58.712679 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 319b43a7-5ec8-4ef8-8721-c1f74315c4f2
	I1127 23:55:58.712685 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ef08c5e-2262-449a-8b3b-5a816f8a0487
	I1127 23:55:58.712693 1525568 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:58 GMT
	I1127 23:55:58.712704 1525568 round_trippers.go:580]     Audit-Id: d7f43b40-79fa-4ff5-b56b-c4378e05f97c
	I1127 23:55:58.712710 1525568 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:58.712830 1525568 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-784312","uid":"9179679b-49e7-4573-ae82-bca6f8470046","resourceVersion":"402","creationTimestamp":"2023-11-27T23:54:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-784312","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-784312","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_54_23_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-27T23:54:19Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1127 23:55:58.713231 1525568 pod_ready.go:92] pod "coredns-5dd5756b68-n6fjh" in "kube-system" namespace has status "Ready":"True"
	I1127 23:55:58.713249 1525568 pod_ready.go:81] duration metric: took 6.238168ms waiting for pod "coredns-5dd5756b68-n6fjh" in "kube-system" namespace to be "Ready" ...
	I1127 23:55:58.713261 1525568 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-784312" in "kube-system" namespace to be "Ready" ...
	I1127 23:55:58.713326 1525568 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-784312
	I1127 23:55:58.713337 1525568 round_trippers.go:469] Request Headers:
	I1127 23:55:58.713344 1525568 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:58.713352 1525568 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1127 23:55:58.715668 1525568 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:55:58.715690 1525568 round_trippers.go:577] Response Headers:
	I1127 23:55:58.715698 1525568 round_trippers.go:580]     Audit-Id: 06ec917c-d240-43aa-bd0a-b8aa7783ff47
	I1127 23:55:58.715713 1525568 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:58.715720 1525568 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:58.715726 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 319b43a7-5ec8-4ef8-8721-c1f74315c4f2
	I1127 23:55:58.715734 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ef08c5e-2262-449a-8b3b-5a816f8a0487
	I1127 23:55:58.715740 1525568 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:58 GMT
	I1127 23:55:58.715849 1525568 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-784312","namespace":"kube-system","uid":"8ccfe057-0978-4cae-8f60-f369839909b8","resourceVersion":"389","creationTimestamp":"2023-11-27T23:54:22Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"1ddfc2ed4e88470f665ac4b583b77f27","kubernetes.io/config.mirror":"1ddfc2ed4e88470f665ac4b583b77f27","kubernetes.io/config.seen":"2023-11-27T23:54:22.333071846Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-784312","uid":"9179679b-49e7-4573-ae82-bca6f8470046","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:54:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 5833 chars]
	I1127 23:55:58.716297 1525568 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-784312
	I1127 23:55:58.716310 1525568 round_trippers.go:469] Request Headers:
	I1127 23:55:58.716318 1525568 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:58.716325 1525568 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1127 23:55:58.718500 1525568 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:55:58.718524 1525568 round_trippers.go:577] Response Headers:
	I1127 23:55:58.718533 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ef08c5e-2262-449a-8b3b-5a816f8a0487
	I1127 23:55:58.718539 1525568 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:58 GMT
	I1127 23:55:58.718546 1525568 round_trippers.go:580]     Audit-Id: 54504114-48ed-4d0f-8043-ecbe64cbed5c
	I1127 23:55:58.718553 1525568 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:58.718560 1525568 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:58.718571 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 319b43a7-5ec8-4ef8-8721-c1f74315c4f2
	I1127 23:55:58.718688 1525568 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-784312","uid":"9179679b-49e7-4573-ae82-bca6f8470046","resourceVersion":"402","creationTimestamp":"2023-11-27T23:54:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-784312","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-784312","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_54_23_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-27T23:54:19Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1127 23:55:58.719062 1525568 pod_ready.go:92] pod "etcd-multinode-784312" in "kube-system" namespace has status "Ready":"True"
	I1127 23:55:58.719081 1525568 pod_ready.go:81] duration metric: took 5.812531ms waiting for pod "etcd-multinode-784312" in "kube-system" namespace to be "Ready" ...
	I1127 23:55:58.719098 1525568 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-784312" in "kube-system" namespace to be "Ready" ...
	I1127 23:55:58.719151 1525568 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-784312
	I1127 23:55:58.719161 1525568 round_trippers.go:469] Request Headers:
	I1127 23:55:58.719168 1525568 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:58.719175 1525568 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1127 23:55:58.721784 1525568 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:55:58.721809 1525568 round_trippers.go:577] Response Headers:
	I1127 23:55:58.721818 1525568 round_trippers.go:580]     Audit-Id: 2c551e17-4883-4028-ae2d-e6005aab9d5a
	I1127 23:55:58.721825 1525568 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:58.721832 1525568 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:58.721839 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 319b43a7-5ec8-4ef8-8721-c1f74315c4f2
	I1127 23:55:58.721848 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ef08c5e-2262-449a-8b3b-5a816f8a0487
	I1127 23:55:58.721876 1525568 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:58 GMT
	I1127 23:55:58.721986 1525568 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-784312","namespace":"kube-system","uid":"0782da70-b0b0-407b-a075-9c1ae5915c7f","resourceVersion":"390","creationTimestamp":"2023-11-27T23:54:20Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"e92132bda09962ee19f51deeb131df5e","kubernetes.io/config.mirror":"e92132bda09962ee19f51deeb131df5e","kubernetes.io/config.seen":"2023-11-27T23:54:14.284091555Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-784312","uid":"9179679b-49e7-4573-ae82-bca6f8470046","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:54:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8219 chars]
	I1127 23:55:58.722518 1525568 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-784312
	I1127 23:55:58.722534 1525568 round_trippers.go:469] Request Headers:
	I1127 23:55:58.722542 1525568 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:58.722550 1525568 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1127 23:55:58.724930 1525568 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:55:58.724993 1525568 round_trippers.go:577] Response Headers:
	I1127 23:55:58.725015 1525568 round_trippers.go:580]     Audit-Id: 0b57565e-716d-4f3a-8878-80b811f14a72
	I1127 23:55:58.725026 1525568 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:58.725033 1525568 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:58.725041 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 319b43a7-5ec8-4ef8-8721-c1f74315c4f2
	I1127 23:55:58.725047 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ef08c5e-2262-449a-8b3b-5a816f8a0487
	I1127 23:55:58.725063 1525568 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:58 GMT
	I1127 23:55:58.725158 1525568 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-784312","uid":"9179679b-49e7-4573-ae82-bca6f8470046","resourceVersion":"402","creationTimestamp":"2023-11-27T23:54:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-784312","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-784312","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_54_23_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-27T23:54:19Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1127 23:55:58.725533 1525568 pod_ready.go:92] pod "kube-apiserver-multinode-784312" in "kube-system" namespace has status "Ready":"True"
	I1127 23:55:58.725550 1525568 pod_ready.go:81] duration metric: took 6.442055ms waiting for pod "kube-apiserver-multinode-784312" in "kube-system" namespace to be "Ready" ...
	I1127 23:55:58.725561 1525568 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-784312" in "kube-system" namespace to be "Ready" ...
	I1127 23:55:58.725629 1525568 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-784312
	I1127 23:55:58.725639 1525568 round_trippers.go:469] Request Headers:
	I1127 23:55:58.725647 1525568 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:58.725654 1525568 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1127 23:55:58.728246 1525568 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:55:58.728271 1525568 round_trippers.go:577] Response Headers:
	I1127 23:55:58.728279 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 319b43a7-5ec8-4ef8-8721-c1f74315c4f2
	I1127 23:55:58.728286 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ef08c5e-2262-449a-8b3b-5a816f8a0487
	I1127 23:55:58.728293 1525568 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:58 GMT
	I1127 23:55:58.728300 1525568 round_trippers.go:580]     Audit-Id: 8b5c28f2-a5b0-4057-bf73-8172b3414280
	I1127 23:55:58.728306 1525568 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:58.728312 1525568 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:58.728444 1525568 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-784312","namespace":"kube-system","uid":"50264ad1-dc74-4cf1-86e4-25bc27ed82ec","resourceVersion":"391","creationTimestamp":"2023-11-27T23:54:22Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"7b0782ad15902781f6c2b81516f0f59a","kubernetes.io/config.mirror":"7b0782ad15902781f6c2b81516f0f59a","kubernetes.io/config.seen":"2023-11-27T23:54:22.333077746Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-784312","uid":"9179679b-49e7-4573-ae82-bca6f8470046","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:54:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7794 chars]
	I1127 23:55:58.728979 1525568 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-784312
	I1127 23:55:58.728993 1525568 round_trippers.go:469] Request Headers:
	I1127 23:55:58.729002 1525568 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:58.729013 1525568 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1127 23:55:58.731370 1525568 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:55:58.731393 1525568 round_trippers.go:577] Response Headers:
	I1127 23:55:58.731401 1525568 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:58 GMT
	I1127 23:55:58.731408 1525568 round_trippers.go:580]     Audit-Id: 202857ae-49cb-494f-a5fc-7516aac586cd
	I1127 23:55:58.731414 1525568 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:58.731420 1525568 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:58.731430 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 319b43a7-5ec8-4ef8-8721-c1f74315c4f2
	I1127 23:55:58.731437 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ef08c5e-2262-449a-8b3b-5a816f8a0487
	I1127 23:55:58.731806 1525568 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-784312","uid":"9179679b-49e7-4573-ae82-bca6f8470046","resourceVersion":"402","creationTimestamp":"2023-11-27T23:54:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-784312","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-784312","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_54_23_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-27T23:54:19Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1127 23:55:58.732193 1525568 pod_ready.go:92] pod "kube-controller-manager-multinode-784312" in "kube-system" namespace has status "Ready":"True"
	I1127 23:55:58.732211 1525568 pod_ready.go:81] duration metric: took 6.6391ms waiting for pod "kube-controller-manager-multinode-784312" in "kube-system" namespace to be "Ready" ...
	I1127 23:55:58.732223 1525568 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-7vspj" in "kube-system" namespace to be "Ready" ...
	I1127 23:55:58.897586 1525568 request.go:629] Waited for 165.298953ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7vspj
	I1127 23:55:58.897688 1525568 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7vspj
	I1127 23:55:58.897701 1525568 round_trippers.go:469] Request Headers:
	I1127 23:55:58.897710 1525568 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:58.897717 1525568 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1127 23:55:58.900200 1525568 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:55:58.900269 1525568 round_trippers.go:577] Response Headers:
	I1127 23:55:58.900300 1525568 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:58 GMT
	I1127 23:55:58.900321 1525568 round_trippers.go:580]     Audit-Id: c857197f-b748-4af4-a13d-a333e0968f87
	I1127 23:55:58.900340 1525568 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:58.900357 1525568 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:58.900379 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 319b43a7-5ec8-4ef8-8721-c1f74315c4f2
	I1127 23:55:58.900386 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ef08c5e-2262-449a-8b3b-5a816f8a0487
	I1127 23:55:58.900502 1525568 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-7vspj","generateName":"kube-proxy-","namespace":"kube-system","uid":"eeecedf5-ddd9-4647-b567-36b194cb229b","resourceVersion":"385","creationTimestamp":"2023-11-27T23:54:36Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"a6ed3fcf-f10c-4b5e-a68c-dc005d2513e1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:54:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a6ed3fcf-f10c-4b5e-a68c-dc005d2513e1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5510 chars]
	I1127 23:55:59.097350 1525568 request.go:629] Waited for 196.324364ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-784312
	I1127 23:55:59.097416 1525568 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-784312
	I1127 23:55:59.097422 1525568 round_trippers.go:469] Request Headers:
	I1127 23:55:59.097431 1525568 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1127 23:55:59.097443 1525568 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:59.100099 1525568 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:55:59.100128 1525568 round_trippers.go:577] Response Headers:
	I1127 23:55:59.100135 1525568 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:59 GMT
	I1127 23:55:59.100142 1525568 round_trippers.go:580]     Audit-Id: d40c98ea-65be-4752-b8df-2c15384c1a45
	I1127 23:55:59.100149 1525568 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:59.100155 1525568 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:59.100161 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 319b43a7-5ec8-4ef8-8721-c1f74315c4f2
	I1127 23:55:59.100168 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ef08c5e-2262-449a-8b3b-5a816f8a0487
	I1127 23:55:59.100270 1525568 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-784312","uid":"9179679b-49e7-4573-ae82-bca6f8470046","resourceVersion":"402","creationTimestamp":"2023-11-27T23:54:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-784312","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-784312","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_54_23_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-27T23:54:19Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1127 23:55:59.100655 1525568 pod_ready.go:92] pod "kube-proxy-7vspj" in "kube-system" namespace has status "Ready":"True"
	I1127 23:55:59.100672 1525568 pod_ready.go:81] duration metric: took 368.439294ms waiting for pod "kube-proxy-7vspj" in "kube-system" namespace to be "Ready" ...
	I1127 23:55:59.100684 1525568 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-xl6nm" in "kube-system" namespace to be "Ready" ...
	I1127 23:55:59.297052 1525568 request.go:629] Waited for 196.302736ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xl6nm
	I1127 23:55:59.297155 1525568 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xl6nm
	I1127 23:55:59.297181 1525568 round_trippers.go:469] Request Headers:
	I1127 23:55:59.297195 1525568 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:59.297203 1525568 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1127 23:55:59.299789 1525568 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:55:59.299815 1525568 round_trippers.go:577] Response Headers:
	I1127 23:55:59.299829 1525568 round_trippers.go:580]     Audit-Id: 06ff394c-a3e6-40e8-87d4-6a91fb616ff3
	I1127 23:55:59.299837 1525568 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:59.299843 1525568 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:59.299849 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 319b43a7-5ec8-4ef8-8721-c1f74315c4f2
	I1127 23:55:59.299856 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ef08c5e-2262-449a-8b3b-5a816f8a0487
	I1127 23:55:59.299866 1525568 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:59 GMT
	I1127 23:55:59.299972 1525568 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-xl6nm","generateName":"kube-proxy-","namespace":"kube-system","uid":"3dc6bd84-38dc-49eb-9af5-9d0f10bc25ec","resourceVersion":"473","creationTimestamp":"2023-11-27T23:55:26Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"a6ed3fcf-f10c-4b5e-a68c-dc005d2513e1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a6ed3fcf-f10c-4b5e-a68c-dc005d2513e1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5518 chars]
	I1127 23:55:59.496623 1525568 request.go:629] Waited for 196.178568ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-784312-m02
	I1127 23:55:59.496687 1525568 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-784312-m02
	I1127 23:55:59.496693 1525568 round_trippers.go:469] Request Headers:
	I1127 23:55:59.496703 1525568 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:59.496715 1525568 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1127 23:55:59.499600 1525568 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:55:59.499667 1525568 round_trippers.go:577] Response Headers:
	I1127 23:55:59.499689 1525568 round_trippers.go:580]     Audit-Id: 582484c1-5aba-400a-b7eb-8cc56863d6f9
	I1127 23:55:59.499708 1525568 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:59.499741 1525568 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:59.499763 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 319b43a7-5ec8-4ef8-8721-c1f74315c4f2
	I1127 23:55:59.499775 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ef08c5e-2262-449a-8b3b-5a816f8a0487
	I1127 23:55:59.499782 1525568 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:59 GMT
	I1127 23:55:59.499902 1525568 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-784312-m02","uid":"51da0d2c-5867-4d61-a2e2-21c69d88c1c1","resourceVersion":"507","creationTimestamp":"2023-11-27T23:55:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-784312-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5378 chars]
	I1127 23:55:59.500274 1525568 pod_ready.go:92] pod "kube-proxy-xl6nm" in "kube-system" namespace has status "Ready":"True"
	I1127 23:55:59.500293 1525568 pod_ready.go:81] duration metric: took 399.599432ms waiting for pod "kube-proxy-xl6nm" in "kube-system" namespace to be "Ready" ...
	I1127 23:55:59.500305 1525568 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-784312" in "kube-system" namespace to be "Ready" ...
	I1127 23:55:59.696604 1525568 request.go:629] Waited for 196.235939ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-784312
	I1127 23:55:59.696666 1525568 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-784312
	I1127 23:55:59.696676 1525568 round_trippers.go:469] Request Headers:
	I1127 23:55:59.696685 1525568 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:59.696697 1525568 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1127 23:55:59.699192 1525568 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:55:59.699275 1525568 round_trippers.go:577] Response Headers:
	I1127 23:55:59.699386 1525568 round_trippers.go:580]     Audit-Id: edc87279-e1c9-419f-8736-ea9fc8c1191a
	I1127 23:55:59.699401 1525568 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:59.699408 1525568 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:59.699417 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 319b43a7-5ec8-4ef8-8721-c1f74315c4f2
	I1127 23:55:59.699432 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ef08c5e-2262-449a-8b3b-5a816f8a0487
	I1127 23:55:59.699446 1525568 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:59 GMT
	I1127 23:55:59.699577 1525568 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-784312","namespace":"kube-system","uid":"540bbd67-2910-425c-999b-69f4ec74bc2c","resourceVersion":"392","creationTimestamp":"2023-11-27T23:54:22Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"ab5d51a61ac97955ebf99588ca9d0290","kubernetes.io/config.mirror":"ab5d51a61ac97955ebf99588ca9d0290","kubernetes.io/config.seen":"2023-11-27T23:54:22.333078697Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-784312","uid":"9179679b-49e7-4573-ae82-bca6f8470046","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:54:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4676 chars]
	I1127 23:55:59.897327 1525568 request.go:629] Waited for 197.315526ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-784312
	I1127 23:55:59.897411 1525568 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-784312
	I1127 23:55:59.897423 1525568 round_trippers.go:469] Request Headers:
	I1127 23:55:59.897432 1525568 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:59.897439 1525568 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1127 23:55:59.899947 1525568 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:55:59.899987 1525568 round_trippers.go:577] Response Headers:
	I1127 23:55:59.899995 1525568 round_trippers.go:580]     Audit-Id: df59719a-4b02-46ff-a986-d029225b4ac3
	I1127 23:55:59.900002 1525568 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:59.900009 1525568 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:59.900015 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 319b43a7-5ec8-4ef8-8721-c1f74315c4f2
	I1127 23:55:59.900021 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ef08c5e-2262-449a-8b3b-5a816f8a0487
	I1127 23:55:59.900034 1525568 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:59 GMT
	I1127 23:55:59.900140 1525568 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-784312","uid":"9179679b-49e7-4573-ae82-bca6f8470046","resourceVersion":"402","creationTimestamp":"2023-11-27T23:54:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-784312","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-784312","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_54_23_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-27T23:54:19Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1127 23:55:59.900526 1525568 pod_ready.go:92] pod "kube-scheduler-multinode-784312" in "kube-system" namespace has status "Ready":"True"
	I1127 23:55:59.900544 1525568 pod_ready.go:81] duration metric: took 400.231491ms waiting for pod "kube-scheduler-multinode-784312" in "kube-system" namespace to be "Ready" ...
	I1127 23:55:59.900556 1525568 pod_ready.go:38] duration metric: took 1.20083418s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
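Several requests in the pod-readiness sweep above were deliberately delayed, per the repeated request.go:629 "Waited for ... due to client-side throttling, not priority and fairness" lines. That throttling comes from client-go's default client-side rate limiter (QPS 5, burst 10), not from the API server. A short sketch of raising those limits on a rest.Config, assuming kubeconfig-based setup:

```go
package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	// client-go defaults to QPS=5, Burst=10 when these are left zero; the
	// "Waited for ... client-side throttling" messages above are this
	// limiter pacing a burst of GETs, not server-side APF queuing.
	cfg.QPS = 50
	cfg.Burst = 100

	cs := kubernetes.NewForConfigOrDie(cfg)
	v, err := cs.Discovery().ServerVersion()
	if err != nil {
		panic(err)
	}
	fmt.Println("connected to", v.GitVersion)
}
```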
	I1127 23:55:59.900573 1525568 system_svc.go:44] waiting for kubelet service to be running ....
	I1127 23:55:59.900635 1525568 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1127 23:55:59.914275 1525568 system_svc.go:56] duration metric: took 13.692882ms WaitForService to wait for kubelet.
	I1127 23:55:59.914342 1525568 kubeadm.go:581] duration metric: took 32.251394505s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1127 23:55:59.914366 1525568 node_conditions.go:102] verifying NodePressure condition ...
	I1127 23:56:00.096697 1525568 request.go:629] Waited for 182.198116ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes
	I1127 23:56:00.096776 1525568 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes
	I1127 23:56:00.096783 1525568 round_trippers.go:469] Request Headers:
	I1127 23:56:00.096793 1525568 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:56:00.096807 1525568 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1127 23:56:00.100483 1525568 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1127 23:56:00.100506 1525568 round_trippers.go:577] Response Headers:
	I1127 23:56:00.100515 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ef08c5e-2262-449a-8b3b-5a816f8a0487
	I1127 23:56:00.100522 1525568 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:56:00 GMT
	I1127 23:56:00.100529 1525568 round_trippers.go:580]     Audit-Id: e97e898f-fc51-4afe-b624-c4855ee5e50d
	I1127 23:56:00.100536 1525568 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:56:00.100542 1525568 round_trippers.go:580]     Content-Type: application/json
	I1127 23:56:00.100549 1525568 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 319b43a7-5ec8-4ef8-8721-c1f74315c4f2
	I1127 23:56:00.100707 1525568 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"508"},"items":[{"metadata":{"name":"multinode-784312","uid":"9179679b-49e7-4573-ae82-bca6f8470046","resourceVersion":"402","creationTimestamp":"2023-11-27T23:54:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-784312","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-784312","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_54_23_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 12452 chars]
	I1127 23:56:00.115733 1525568 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1127 23:56:00.115781 1525568 node_conditions.go:123] node cpu capacity is 2
	I1127 23:56:00.115794 1525568 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1127 23:56:00.115800 1525568 node_conditions.go:123] node cpu capacity is 2
	I1127 23:56:00.115805 1525568 node_conditions.go:105] duration metric: took 201.433731ms to run NodePressure ...
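The two capacity pairs above (ephemeral storage 203034800Ki, 2 CPUs, once per node) are read from each Node's status. A sketch of the same per-node readout, assuming the values come from .status.capacity:

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		// Mirrors the "node cpu capacity is 2" / "storage ephemeral
		// capacity is 203034800Ki" lines emitted per node above.
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), eph.String())
	}
}
```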
	I1127 23:56:00.115818 1525568 start.go:228] waiting for startup goroutines ...
	I1127 23:56:00.115861 1525568 start.go:242] writing updated cluster config ...
	I1127 23:56:00.116293 1525568 ssh_runner.go:195] Run: rm -f paused
	I1127 23:56:00.361531 1525568 start.go:600] kubectl: 1.28.4, cluster: 1.28.4 (minor skew: 0)
	I1127 23:56:00.364385 1525568 out.go:177] * Done! kubectl is now configured to use "multinode-784312" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* Nov 27 23:55:08 multinode-784312 crio[900]: time="2023-11-27 23:55:08.658159467Z" level=info msg="Starting container: f214499038f4a7a34749bbe194a19dca200018c29fbafc0bc57a75b6fbd2095e" id=5364bbdb-b7d4-4f28-90fd-f9bbc1d3d451 name=/runtime.v1.RuntimeService/StartContainer
	Nov 27 23:55:08 multinode-784312 crio[900]: time="2023-11-27 23:55:08.661344249Z" level=info msg="Created container 03a6e690e57fb24075642860a1e1cdb964e36d869b23e39b5408271b787c9a4b: kube-system/storage-provisioner/storage-provisioner" id=92df0e9d-2a4f-4bab-935c-f7c1b7d28a0d name=/runtime.v1.RuntimeService/CreateContainer
	Nov 27 23:55:08 multinode-784312 crio[900]: time="2023-11-27 23:55:08.662011024Z" level=info msg="Starting container: 03a6e690e57fb24075642860a1e1cdb964e36d869b23e39b5408271b787c9a4b" id=3302a5fc-2669-4679-8957-0d2e07e0649e name=/runtime.v1.RuntimeService/StartContainer
	Nov 27 23:55:08 multinode-784312 crio[900]: time="2023-11-27 23:55:08.670527997Z" level=info msg="Started container" PID=1967 containerID=f214499038f4a7a34749bbe194a19dca200018c29fbafc0bc57a75b6fbd2095e description=kube-system/coredns-5dd5756b68-n6fjh/coredns id=5364bbdb-b7d4-4f28-90fd-f9bbc1d3d451 name=/runtime.v1.RuntimeService/StartContainer sandboxID=c718394f9d17a84fbec089d4b97350a9873203866c61e0870050da77e9df12b8
	Nov 27 23:55:08 multinode-784312 crio[900]: time="2023-11-27 23:55:08.683162855Z" level=info msg="Started container" PID=1953 containerID=03a6e690e57fb24075642860a1e1cdb964e36d869b23e39b5408271b787c9a4b description=kube-system/storage-provisioner/storage-provisioner id=3302a5fc-2669-4679-8957-0d2e07e0649e name=/runtime.v1.RuntimeService/StartContainer sandboxID=eae366eea2aaa0b248f3613ba912be932e61d0deee82ccdaf7c97c5a671428b2
	Nov 27 23:56:01 multinode-784312 crio[900]: time="2023-11-27 23:56:01.676990600Z" level=info msg="Running pod sandbox: default/busybox-5bc68d56bd-cls7b/POD" id=73dca18b-1bd6-4f22-bd9a-3426a300cace name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 27 23:56:01 multinode-784312 crio[900]: time="2023-11-27 23:56:01.677046305Z" level=warning msg="Allowed annotations are specified for workload []"
	Nov 27 23:56:01 multinode-784312 crio[900]: time="2023-11-27 23:56:01.697441318Z" level=info msg="Got pod network &{Name:busybox-5bc68d56bd-cls7b Namespace:default ID:4f04f71919a2a382169a50c4a91bd8dcdd42c37d9757e210b2373d0555792398 UID:d946f7b7-263f-4e23-9a59-631b856fde43 NetNS:/var/run/netns/ec4c1f7b-c902-439b-bcbb-0d90fc57457f Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Nov 27 23:56:01 multinode-784312 crio[900]: time="2023-11-27 23:56:01.697624406Z" level=info msg="Adding pod default_busybox-5bc68d56bd-cls7b to CNI network \"kindnet\" (type=ptp)"
	Nov 27 23:56:01 multinode-784312 crio[900]: time="2023-11-27 23:56:01.707470630Z" level=info msg="Got pod network &{Name:busybox-5bc68d56bd-cls7b Namespace:default ID:4f04f71919a2a382169a50c4a91bd8dcdd42c37d9757e210b2373d0555792398 UID:d946f7b7-263f-4e23-9a59-631b856fde43 NetNS:/var/run/netns/ec4c1f7b-c902-439b-bcbb-0d90fc57457f Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Nov 27 23:56:01 multinode-784312 crio[900]: time="2023-11-27 23:56:01.707619117Z" level=info msg="Checking pod default_busybox-5bc68d56bd-cls7b for CNI network kindnet (type=ptp)"
	Nov 27 23:56:01 multinode-784312 crio[900]: time="2023-11-27 23:56:01.728545317Z" level=info msg="Ran pod sandbox 4f04f71919a2a382169a50c4a91bd8dcdd42c37d9757e210b2373d0555792398 with infra container: default/busybox-5bc68d56bd-cls7b/POD" id=73dca18b-1bd6-4f22-bd9a-3426a300cace name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 27 23:56:01 multinode-784312 crio[900]: time="2023-11-27 23:56:01.730077413Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28" id=4e784ec7-d91d-4b9c-bf6a-05c7cc858952 name=/runtime.v1.ImageService/ImageStatus
	Nov 27 23:56:01 multinode-784312 crio[900]: time="2023-11-27 23:56:01.730296857Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28 not found" id=4e784ec7-d91d-4b9c-bf6a-05c7cc858952 name=/runtime.v1.ImageService/ImageStatus
	Nov 27 23:56:01 multinode-784312 crio[900]: time="2023-11-27 23:56:01.731178695Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28" id=7455bc34-1d30-432a-ad87-3f7ab2ea0883 name=/runtime.v1.ImageService/PullImage
	Nov 27 23:56:01 multinode-784312 crio[900]: time="2023-11-27 23:56:01.732467085Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28\""
	Nov 27 23:56:02 multinode-784312 crio[900]: time="2023-11-27 23:56:02.377984443Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28\""
	Nov 27 23:56:03 multinode-784312 crio[900]: time="2023-11-27 23:56:03.653424921Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:859d41e4316c182cb559f9ae3c5ffcac8602ee1179794a1707c06cd092a008d3" id=7455bc34-1d30-432a-ad87-3f7ab2ea0883 name=/runtime.v1.ImageService/PullImage
	Nov 27 23:56:03 multinode-784312 crio[900]: time="2023-11-27 23:56:03.656003900Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28" id=9b895441-1391-4c30-b3ff-6799488c6670 name=/runtime.v1.ImageService/ImageStatus
	Nov 27 23:56:03 multinode-784312 crio[900]: time="2023-11-27 23:56:03.657302086Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:89a35e2ebb6b938201966889b5e8c85b931db6432c5643966116cd1c28bf45cd,RepoTags:[gcr.io/k8s-minikube/busybox:1.28],RepoDigests:[gcr.io/k8s-minikube/busybox@sha256:859d41e4316c182cb559f9ae3c5ffcac8602ee1179794a1707c06cd092a008d3 gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12],Size_:1496796,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=9b895441-1391-4c30-b3ff-6799488c6670 name=/runtime.v1.ImageService/ImageStatus
	Nov 27 23:56:03 multinode-784312 crio[900]: time="2023-11-27 23:56:03.658724694Z" level=info msg="Creating container: default/busybox-5bc68d56bd-cls7b/busybox" id=6b968d72-53e3-45f0-8074-b50f39f09c3f name=/runtime.v1.RuntimeService/CreateContainer
	Nov 27 23:56:03 multinode-784312 crio[900]: time="2023-11-27 23:56:03.658821038Z" level=warning msg="Allowed annotations are specified for workload []"
	Nov 27 23:56:03 multinode-784312 crio[900]: time="2023-11-27 23:56:03.729658402Z" level=info msg="Created container 942db93b9994662c11a7a109c7d1474d75e1f15402143766a28611d3a7914ea2: default/busybox-5bc68d56bd-cls7b/busybox" id=6b968d72-53e3-45f0-8074-b50f39f09c3f name=/runtime.v1.RuntimeService/CreateContainer
	Nov 27 23:56:03 multinode-784312 crio[900]: time="2023-11-27 23:56:03.732345022Z" level=info msg="Starting container: 942db93b9994662c11a7a109c7d1474d75e1f15402143766a28611d3a7914ea2" id=dcb4f476-5802-4194-adff-c3e431d36f68 name=/runtime.v1.RuntimeService/StartContainer
	Nov 27 23:56:03 multinode-784312 crio[900]: time="2023-11-27 23:56:03.742924622Z" level=info msg="Started container" PID=2103 containerID=942db93b9994662c11a7a109c7d1474d75e1f15402143766a28611d3a7914ea2 description=default/busybox-5bc68d56bd-cls7b/busybox id=dcb4f476-5802-4194-adff-c3e431d36f68 name=/runtime.v1.RuntimeService/StartContainer sandboxID=4f04f71919a2a382169a50c4a91bd8dcdd42c37d9757e210b2373d0555792398
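The busybox startup traced above follows the standard CRI sequence: ImageStatus (the "Image ... not found" line), PullImage, then CreateContainer and StartContainer. A sketch of the pull-if-absent step against CRI-O's default socket, using the k8s.io/cri-api Go bindings; the image name is taken from the log, everything else is illustrative:

```go
package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	defer cancel()

	img := runtimeapi.NewImageServiceClient(conn)
	spec := &runtimeapi.ImageSpec{Image: "gcr.io/k8s-minikube/busybox:1.28"}

	// ImageStatus returns a nil Image when the tag is absent locally,
	// which is what produced the "not found" log line above.
	st, err := img.ImageStatus(ctx, &runtimeapi.ImageStatusRequest{Image: spec})
	if err != nil {
		panic(err)
	}
	if st.Image == nil {
		if _, err := img.PullImage(ctx, &runtimeapi.PullImageRequest{Image: spec}); err != nil {
			panic(err)
		}
	}
	fmt.Println("image present")
}
```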
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	942db93b99946       gcr.io/k8s-minikube/busybox@sha256:859d41e4316c182cb559f9ae3c5ffcac8602ee1179794a1707c06cd092a008d3   5 seconds ago        Running             busybox                   0                   4f04f71919a2a       busybox-5bc68d56bd-cls7b
	f214499038f4a       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                      About a minute ago   Running             coredns                   0                   c718394f9d17a       coredns-5dd5756b68-n6fjh
	03a6e690e57fb       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      About a minute ago   Running             storage-provisioner       0                   eae366eea2aaa       storage-provisioner
	02901bcc28306       3ca3ca488cf13fde14cfc4b3ffde0c53a8c161b030f4a444a797fba6aef38c39                                      About a minute ago   Running             kube-proxy                0                   7f24e55dfded1       kube-proxy-7vspj
	1d778594299c0       04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26                                      About a minute ago   Running             kindnet-cni               0                   96a41d672e0ae       kindnet-hwrdz
	d3a630e6b119e       05c284c929889d88306fdb3dd14ee2d0132543740f9e247685243214fc3d2c54                                      About a minute ago   Running             kube-scheduler            0                   d87372624e91c       kube-scheduler-multinode-784312
	63f1099897213       04b4c447bb9d4840af3bf7e836397379d65df87c86e55dcd27f31a8d11df2419                                      About a minute ago   Running             kube-apiserver            0                   d0de4d2df2ff3       kube-apiserver-multinode-784312
	070c07d5ad594       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                      About a minute ago   Running             etcd                      0                   6277bb33265b8       etcd-multinode-784312
	986e2e14e661e       9961cbceaf234d59b7dcf8a197a024f3e3ce4b7fe2b67c2378efd3d209ca994b                                      About a minute ago   Running             kube-controller-manager   0                   1eb1eb7240fdb       kube-controller-manager-multinode-784312
	
	* 
	* ==> coredns [f214499038f4a7a34749bbe194a19dca200018c29fbafc0bc57a75b6fbd2095e] <==
	* [INFO] 10.244.0.3:58353 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000122903s
	[INFO] 10.244.1.2:57211 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000161066s
	[INFO] 10.244.1.2:34829 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001106877s
	[INFO] 10.244.1.2:45232 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000109628s
	[INFO] 10.244.1.2:37467 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000062448s
	[INFO] 10.244.1.2:55860 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.000951548s
	[INFO] 10.244.1.2:44891 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000078859s
	[INFO] 10.244.1.2:42217 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000074411s
	[INFO] 10.244.1.2:57666 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000073608s
	[INFO] 10.244.0.3:51736 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000126522s
	[INFO] 10.244.0.3:55505 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000058962s
	[INFO] 10.244.0.3:43328 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000074707s
	[INFO] 10.244.0.3:49571 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000067051s
	[INFO] 10.244.1.2:60446 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000106691s
	[INFO] 10.244.1.2:37280 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00006345s
	[INFO] 10.244.1.2:54063 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000058699s
	[INFO] 10.244.1.2:47184 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000067602s
	[INFO] 10.244.0.3:60148 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000118948s
	[INFO] 10.244.0.3:56342 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000163133s
	[INFO] 10.244.0.3:33601 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000113976s
	[INFO] 10.244.0.3:52293 - 5 "PTR IN 1.58.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000108914s
	[INFO] 10.244.1.2:35420 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000107281s
	[INFO] 10.244.1.2:55000 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000127391s
	[INFO] 10.244.1.2:43957 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000065517s
	[INFO] 10.244.1.2:45134 - 5 "PTR IN 1.58.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00012612s
	
	* 
	* ==> describe nodes <==
	* Name:               multinode-784312
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=multinode-784312
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c8086df69b157f30f19d083fe45cc014f102df45
	                    minikube.k8s.io/name=multinode-784312
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_11_27T23_54_23_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 27 Nov 2023 23:54:19 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-784312
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 27 Nov 2023 23:56:04 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 27 Nov 2023 23:55:08 +0000   Mon, 27 Nov 2023 23:54:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 27 Nov 2023 23:55:08 +0000   Mon, 27 Nov 2023 23:54:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 27 Nov 2023 23:55:08 +0000   Mon, 27 Nov 2023 23:54:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 27 Nov 2023 23:55:08 +0000   Mon, 27 Nov 2023 23:55:08 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.2
	  Hostname:    multinode-784312
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022500Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022500Ki
	  pods:               110
	System Info:
	  Machine ID:                 16ae1d9d68eb487fb39dadb6d9e4f209
	  System UUID:                e51146d8-dddd-428b-bb03-c22cbcddca3c
	  Boot ID:                    eb10cf4d-5884-4052-85dd-9e7b7999f82d
	  Kernel Version:             5.15.0-1050-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-cls7b                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 coredns-5dd5756b68-n6fjh                    100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     93s
	  kube-system                 etcd-multinode-784312                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         107s
	  kube-system                 kindnet-hwrdz                               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      93s
	  kube-system                 kube-apiserver-multinode-784312             250m (12%)    0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 kube-controller-manager-multinode-784312    200m (10%)    0 (0%)      0 (0%)           0 (0%)         107s
	  kube-system                 kube-proxy-7vspj                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 kube-scheduler-multinode-784312             100m (5%)     0 (0%)      0 (0%)           0 (0%)         107s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         91s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 91s   kube-proxy       
	  Normal  Starting                 107s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  107s  kubelet          Node multinode-784312 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    107s  kubelet          Node multinode-784312 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     107s  kubelet          Node multinode-784312 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           94s   node-controller  Node multinode-784312 event: Registered Node multinode-784312 in Controller
	  Normal  NodeReady                61s   kubelet          Node multinode-784312 status is now: NodeReady
	
	
	Name:               multinode-784312-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=multinode-784312-m02
	                    kubernetes.io/os=linux
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 27 Nov 2023 23:55:26 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-784312-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 27 Nov 2023 23:56:07 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 27 Nov 2023 23:55:58 +0000   Mon, 27 Nov 2023 23:55:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 27 Nov 2023 23:55:58 +0000   Mon, 27 Nov 2023 23:55:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 27 Nov 2023 23:55:58 +0000   Mon, 27 Nov 2023 23:55:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 27 Nov 2023 23:55:58 +0000   Mon, 27 Nov 2023 23:55:58 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.3
	  Hostname:    multinode-784312-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022500Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022500Ki
	  pods:               110
	System Info:
	  Machine ID:                 1d58f648cce446e78b43d532136d8b77
	  System UUID:                c9d40e1a-fdb9-4ee7-9b56-3abdd962436b
	  Boot ID:                    eb10cf4d-5884-4052-85dd-9e7b7999f82d
	  Kernel Version:             5.15.0-1050-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-dmvq4    0 (0%)        0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 kindnet-tv94c               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      43s
	  kube-system                 kube-proxy-xl6nm            0 (0%)        0 (0%)      0 (0%)           0 (0%)         43s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	  hugepages-32Mi     0 (0%)     0 (0%)
	  hugepages-64Ki     0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 41s                kube-proxy       
	  Normal  NodeHasSufficientMemory  43s (x5 over 44s)  kubelet          Node multinode-784312-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    43s (x5 over 44s)  kubelet          Node multinode-784312-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     43s (x5 over 44s)  kubelet          Node multinode-784312-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           39s                node-controller  Node multinode-784312-m02 event: Registered Node multinode-784312-m02 in Controller
	  Normal  NodeReady                11s                kubelet          Node multinode-784312-m02 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [  +0.001173] FS-Cache: O-key=[8] '7bd7c90000000000'
	[  +0.000758] FS-Cache: N-cookie c=00000055 [p=0000004b fl=2 nc=0 na=1]
	[  +0.001002] FS-Cache: N-cookie d=00000000fe658175{9p.inode} n=00000000124770db
	[  +0.001086] FS-Cache: N-key=[8] '7bd7c90000000000'
	[  +2.367044] FS-Cache: Duplicate cookie detected
	[  +0.000741] FS-Cache: O-cookie c=0000004d [p=0000004b fl=226 nc=0 na=1]
	[  +0.001026] FS-Cache: O-cookie d=00000000fe658175{9p.inode} n=0000000014c47df7
	[  +0.001148] FS-Cache: O-key=[8] '7ad7c90000000000'
	[  +0.000733] FS-Cache: N-cookie c=00000057 [p=0000004b fl=2 nc=0 na=1]
	[  +0.001050] FS-Cache: N-cookie d=00000000fe658175{9p.inode} n=00000000ce1a5764
	[  +0.001129] FS-Cache: N-key=[8] '7ad7c90000000000'
	[  +0.423214] FS-Cache: Duplicate cookie detected
	[  +0.000771] FS-Cache: O-cookie c=00000051 [p=0000004b fl=226 nc=0 na=1]
	[  +0.001066] FS-Cache: O-cookie d=00000000fe658175{9p.inode} n=00000000c7b9da3e
	[  +0.001094] FS-Cache: O-key=[8] '80d7c90000000000'
	[  +0.000747] FS-Cache: N-cookie c=00000058 [p=0000004b fl=2 nc=0 na=1]
	[  +0.000969] FS-Cache: N-cookie d=00000000fe658175{9p.inode} n=00000000e29de338
	[  +0.001127] FS-Cache: N-key=[8] '80d7c90000000000'
	[  +4.315058] FS-Cache: Duplicate cookie detected
	[  +0.000758] FS-Cache: O-cookie c=0000005a [p=00000002 fl=222 nc=0 na=1]
	[  +0.001009] FS-Cache: O-cookie d=000000006e56f75c{9P.session} n=00000000db26fcaf
	[  +0.001116] FS-Cache: O-key=[10] '34333030363632333434'
	[  +0.000817] FS-Cache: N-cookie c=0000005b [p=00000002 fl=2 nc=0 na=1]
	[  +0.001032] FS-Cache: N-cookie d=000000006e56f75c{9P.session} n=00000000f4cdfc72
	[  +0.001142] FS-Cache: N-key=[10] '34333030363632333434'
	
	* 
	* ==> etcd [070c07d5ad594ed15b08e22f09020ae92ec7ca0053b0a9e85630cd8399d787ea] <==
	* {"level":"info","ts":"2023-11-27T23:54:15.204537Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 switched to configuration voters=(12882097698489969905)"}
	{"level":"info","ts":"2023-11-27T23:54:15.20631Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","added-peer-id":"b2c6679ac05f2cf1","added-peer-peer-urls":["https://192.168.58.2:2380"]}
	{"level":"info","ts":"2023-11-27T23:54:15.206008Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-11-27T23:54:15.206814Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"b2c6679ac05f2cf1","initial-advertise-peer-urls":["https://192.168.58.2:2380"],"listen-peer-urls":["https://192.168.58.2:2380"],"advertise-client-urls":["https://192.168.58.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.58.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-11-27T23:54:15.207156Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-11-27T23:54:15.206043Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2023-11-27T23:54:15.207333Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2023-11-27T23:54:15.669893Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 is starting a new election at term 1"}
	{"level":"info","ts":"2023-11-27T23:54:15.669939Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became pre-candidate at term 1"}
	{"level":"info","ts":"2023-11-27T23:54:15.669961Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgPreVoteResp from b2c6679ac05f2cf1 at term 1"}
	{"level":"info","ts":"2023-11-27T23:54:15.669973Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became candidate at term 2"}
	{"level":"info","ts":"2023-11-27T23:54:15.66998Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgVoteResp from b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2023-11-27T23:54:15.669989Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became leader at term 2"}
	{"level":"info","ts":"2023-11-27T23:54:15.669998Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b2c6679ac05f2cf1 elected leader b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2023-11-27T23:54:15.677954Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-27T23:54:15.682067Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"b2c6679ac05f2cf1","local-member-attributes":"{Name:multinode-784312 ClientURLs:[https://192.168.58.2:2379]}","request-path":"/0/members/b2c6679ac05f2cf1/attributes","cluster-id":"3a56e4ca95e2355c","publish-timeout":"7s"}
	{"level":"info","ts":"2023-11-27T23:54:15.685921Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-27T23:54:15.685946Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-11-27T23:54:15.686027Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-11-27T23:54:15.687083Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-11-27T23:54:15.687088Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.58.2:2379"}
	{"level":"info","ts":"2023-11-27T23:54:15.686018Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-27T23:54:15.687591Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-27T23:54:15.722929Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-11-27T23:54:15.722976Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	* 
	* ==> kernel <==
	*  23:56:09 up  6:38,  0 users,  load average: 1.39, 1.71, 1.95
	Linux multinode-784312 5.15.0-1050-aws #55~20.04.1-Ubuntu SMP Mon Nov 6 12:18:16 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	* 
	* ==> kindnet [1d778594299c022a621a1593d4ef0e074b1f4d3833bb4fe50d26ca54800b1262] <==
	* I1127 23:55:07.982981       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I1127 23:55:07.983014       1 main.go:227] handling current node
	I1127 23:55:18.001091       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I1127 23:55:18.001218       1 main.go:227] handling current node
	I1127 23:55:28.014099       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I1127 23:55:28.014219       1 main.go:227] handling current node
	I1127 23:55:28.014258       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I1127 23:55:28.014288       1 main.go:250] Node multinode-784312-m02 has CIDR [10.244.1.0/24] 
	I1127 23:55:28.014477       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 192.168.58.3 Flags: [] Table: 0} 
	I1127 23:55:38.027485       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I1127 23:55:38.027515       1 main.go:227] handling current node
	I1127 23:55:38.027526       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I1127 23:55:38.027534       1 main.go:250] Node multinode-784312-m02 has CIDR [10.244.1.0/24] 
	I1127 23:55:48.038770       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I1127 23:55:48.038804       1 main.go:227] handling current node
	I1127 23:55:48.038816       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I1127 23:55:48.038822       1 main.go:250] Node multinode-784312-m02 has CIDR [10.244.1.0/24] 
	I1127 23:55:58.049781       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I1127 23:55:58.049810       1 main.go:227] handling current node
	I1127 23:55:58.049822       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I1127 23:55:58.049827       1 main.go:250] Node multinode-784312-m02 has CIDR [10.244.1.0/24] 
	I1127 23:56:08.067249       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I1127 23:56:08.067653       1 main.go:227] handling current node
	I1127 23:56:08.067728       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I1127 23:56:08.067763       1 main.go:250] Node multinode-784312-m02 has CIDR [10.244.1.0/24] 
	
	* 
	* ==> kube-apiserver [63f1099897213983b5ce0a7bf940993d50b2f562ae391d142e087dc375141ece] <==
	* I1127 23:54:19.496553       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1127 23:54:19.496581       1 shared_informer.go:318] Caches are synced for configmaps
	I1127 23:54:19.497391       1 aggregator.go:166] initial CRD sync complete...
	I1127 23:54:19.497409       1 autoregister_controller.go:141] Starting autoregister controller
	I1127 23:54:19.497414       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1127 23:54:19.497427       1 cache.go:39] Caches are synced for autoregister controller
	I1127 23:54:19.506660       1 controller.go:624] quota admission added evaluator for: namespaces
	I1127 23:54:19.514332       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1127 23:54:19.543334       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1127 23:54:20.251066       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1127 23:54:20.256238       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1127 23:54:20.256267       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1127 23:54:20.764234       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1127 23:54:20.806217       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1127 23:54:20.937461       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1127 23:54:20.943340       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.58.2]
	I1127 23:54:20.944433       1 controller.go:624] quota admission added evaluator for: endpoints
	I1127 23:54:20.951938       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1127 23:54:21.462887       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1127 23:54:22.246343       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1127 23:54:22.261523       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1127 23:54:22.286395       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1127 23:54:36.179289       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1127 23:54:36.627768       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	E1127 23:56:04.687030       1 watch.go:287] unable to encode watch object *v1.WatchEvent: http2: stream closed (&streaming.encoderWithAllocator{writer:responsewriter.outerWithCloseNotifyAndFlush{UserProvidedDecorator:(*metrics.ResponseWriterDelegator)(0x4004b84d50), InnerCloseNotifierFlusher:struct { httpsnoop.Unwrapper; http.ResponseWriter; http.Flusher; http.CloseNotifier; http.Pusher }{Unwrapper:(*httpsnoop.rw)(0x400c0b1180), ResponseWriter:(*httpsnoop.rw)(0x400c0b1180), Flusher:(*httpsnoop.rw)(0x400c0b1180), CloseNotifier:(*httpsnoop.rw)(0x400c0b1180), Pusher:(*httpsnoop.rw)(0x400c0b1180)}}, encoder:(*versioning.codec)(0x400bfb0640), memAllocator:(*runtime.Allocator)(0x400a141ab8)})
	
	* 
	* ==> kube-controller-manager [986e2e14e661e3392dfdc0bca1fa090f53d060cbe447ba624dc747c638ab7ae1] <==
	* I1127 23:54:37.133388       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="33.308276ms"
	I1127 23:54:37.133476       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="54.342µs"
	I1127 23:55:08.189985       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="152.475µs"
	I1127 23:55:08.203958       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="74.403µs"
	I1127 23:55:09.597458       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="11.999688ms"
	I1127 23:55:09.598021       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="268.026µs"
	I1127 23:55:10.825112       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I1127 23:55:26.635612       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-784312-m02\" does not exist"
	I1127 23:55:26.650991       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-784312-m02" podCIDRs=["10.244.1.0/24"]
	I1127 23:55:26.659206       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-tv94c"
	I1127 23:55:26.659307       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-xl6nm"
	I1127 23:55:30.826792       1 event.go:307] "Event occurred" object="multinode-784312-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-784312-m02 event: Registered Node multinode-784312-m02 in Controller"
	I1127 23:55:30.826824       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-784312-m02"
	I1127 23:55:58.286309       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-784312-m02"
	I1127 23:56:01.290224       1 event.go:307] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-5bc68d56bd to 2"
	I1127 23:56:01.315252       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-dmvq4"
	I1127 23:56:01.324596       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-cls7b"
	I1127 23:56:01.350492       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="59.858465ms"
	I1127 23:56:01.448834       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="97.863354ms"
	I1127 23:56:01.472080       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="22.823667ms"
	I1127 23:56:01.472294       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="53.555µs"
	I1127 23:56:04.315677       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="4.210553ms"
	I1127 23:56:04.315997       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="42.026µs"
	I1127 23:56:04.679297       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="4.473262ms"
	I1127 23:56:04.679991       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="56.541µs"
	
	* 
	* ==> kube-proxy [02901bcc2830684315080ccc2c1a3140b69a394e32dbfc28e13942597cc461d1] <==
	* I1127 23:54:37.812226       1 server_others.go:69] "Using iptables proxy"
	I1127 23:54:37.839453       1 node.go:141] Successfully retrieved node IP: 192.168.58.2
	I1127 23:54:37.888511       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1127 23:54:37.891098       1 server_others.go:152] "Using iptables Proxier"
	I1127 23:54:37.891141       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1127 23:54:37.891149       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1127 23:54:37.891221       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1127 23:54:37.891446       1 server.go:846] "Version info" version="v1.28.4"
	I1127 23:54:37.891463       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1127 23:54:37.895709       1 config.go:188] "Starting service config controller"
	I1127 23:54:37.895740       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1127 23:54:37.895765       1 config.go:97] "Starting endpoint slice config controller"
	I1127 23:54:37.895769       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1127 23:54:37.895809       1 config.go:315] "Starting node config controller"
	I1127 23:54:37.895816       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1127 23:54:37.998573       1 shared_informer.go:318] Caches are synced for node config
	I1127 23:54:38.005965       1 shared_informer.go:318] Caches are synced for service config
	I1127 23:54:38.006022       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-scheduler [d3a630e6b119edce1edb3a1294aa7cb31b0b7ae4890b202f91fa9d256a4ba683] <==
	* W1127 23:54:19.496957       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1127 23:54:19.496964       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1127 23:54:19.497034       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1127 23:54:19.497050       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1127 23:54:19.497103       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1127 23:54:19.497117       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1127 23:54:19.497174       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1127 23:54:19.497188       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1127 23:54:19.497244       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1127 23:54:19.497260       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1127 23:54:19.497312       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1127 23:54:19.497327       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1127 23:54:20.314184       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1127 23:54:20.314221       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1127 23:54:20.344470       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1127 23:54:20.344578       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1127 23:54:20.368983       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1127 23:54:20.369182       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1127 23:54:20.476608       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1127 23:54:20.476729       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1127 23:54:20.489453       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1127 23:54:20.489546       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1127 23:54:20.512897       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1127 23:54:20.513005       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I1127 23:54:20.857700       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* Nov 27 23:54:36 multinode-784312 kubelet[1392]: I1127 23:54:36.660138    1392 topology_manager.go:215] "Topology Admit Handler" podUID="068cf2a8-3b1a-431c-9cc5-2f290d6755cd" podNamespace="kube-system" podName="kindnet-hwrdz"
	Nov 27 23:54:36 multinode-784312 kubelet[1392]: I1127 23:54:36.758186    1392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/eeecedf5-ddd9-4647-b567-36b194cb229b-lib-modules\") pod \"kube-proxy-7vspj\" (UID: \"eeecedf5-ddd9-4647-b567-36b194cb229b\") " pod="kube-system/kube-proxy-7vspj"
	Nov 27 23:54:36 multinode-784312 kubelet[1392]: I1127 23:54:36.758297    1392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/068cf2a8-3b1a-431c-9cc5-2f290d6755cd-xtables-lock\") pod \"kindnet-hwrdz\" (UID: \"068cf2a8-3b1a-431c-9cc5-2f290d6755cd\") " pod="kube-system/kindnet-hwrdz"
	Nov 27 23:54:36 multinode-784312 kubelet[1392]: I1127 23:54:36.758352    1392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/068cf2a8-3b1a-431c-9cc5-2f290d6755cd-lib-modules\") pod \"kindnet-hwrdz\" (UID: \"068cf2a8-3b1a-431c-9cc5-2f290d6755cd\") " pod="kube-system/kindnet-hwrdz"
	Nov 27 23:54:36 multinode-784312 kubelet[1392]: I1127 23:54:36.758409    1392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/068cf2a8-3b1a-431c-9cc5-2f290d6755cd-cni-cfg\") pod \"kindnet-hwrdz\" (UID: \"068cf2a8-3b1a-431c-9cc5-2f290d6755cd\") " pod="kube-system/kindnet-hwrdz"
	Nov 27 23:54:36 multinode-784312 kubelet[1392]: I1127 23:54:36.758441    1392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-htwgj\" (UniqueName: \"kubernetes.io/projected/eeecedf5-ddd9-4647-b567-36b194cb229b-kube-api-access-htwgj\") pod \"kube-proxy-7vspj\" (UID: \"eeecedf5-ddd9-4647-b567-36b194cb229b\") " pod="kube-system/kube-proxy-7vspj"
	Nov 27 23:54:36 multinode-784312 kubelet[1392]: I1127 23:54:36.758468    1392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/eeecedf5-ddd9-4647-b567-36b194cb229b-kube-proxy\") pod \"kube-proxy-7vspj\" (UID: \"eeecedf5-ddd9-4647-b567-36b194cb229b\") " pod="kube-system/kube-proxy-7vspj"
	Nov 27 23:54:36 multinode-784312 kubelet[1392]: I1127 23:54:36.758491    1392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/eeecedf5-ddd9-4647-b567-36b194cb229b-xtables-lock\") pod \"kube-proxy-7vspj\" (UID: \"eeecedf5-ddd9-4647-b567-36b194cb229b\") " pod="kube-system/kube-proxy-7vspj"
	Nov 27 23:54:36 multinode-784312 kubelet[1392]: I1127 23:54:36.758514    1392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxvmr\" (UniqueName: \"kubernetes.io/projected/068cf2a8-3b1a-431c-9cc5-2f290d6755cd-kube-api-access-hxvmr\") pod \"kindnet-hwrdz\" (UID: \"068cf2a8-3b1a-431c-9cc5-2f290d6755cd\") " pod="kube-system/kindnet-hwrdz"
	Nov 27 23:54:37 multinode-784312 kubelet[1392]: W1127 23:54:37.303324    1392 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/6fd8b66557924c6a255350f8721ffd2110fc0027983ca6688ef0474215f93244/crio-7f24e55dfded144d8023da420fe48c02386167ade991c6db557152d1c95227d5 WatchSource:0}: Error finding container 7f24e55dfded144d8023da420fe48c02386167ade991c6db557152d1c95227d5: Status 404 returned error can't find the container with id 7f24e55dfded144d8023da420fe48c02386167ade991c6db557152d1c95227d5
	Nov 27 23:54:37 multinode-784312 kubelet[1392]: W1127 23:54:37.307212    1392 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/6fd8b66557924c6a255350f8721ffd2110fc0027983ca6688ef0474215f93244/crio-96a41d672e0ae424f3de898fffa9e0634e35a5dae7c23dec35b50f34944afef9 WatchSource:0}: Error finding container 96a41d672e0ae424f3de898fffa9e0634e35a5dae7c23dec35b50f34944afef9: Status 404 returned error can't find the container with id 96a41d672e0ae424f3de898fffa9e0634e35a5dae7c23dec35b50f34944afef9
	Nov 27 23:54:38 multinode-784312 kubelet[1392]: I1127 23:54:38.526829    1392 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-hwrdz" podStartSLOduration=2.526784325 podCreationTimestamp="2023-11-27 23:54:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-11-27 23:54:38.510650719 +0000 UTC m=+16.287595289" watchObservedRunningTime="2023-11-27 23:54:38.526784325 +0000 UTC m=+16.303728887"
	Nov 27 23:54:42 multinode-784312 kubelet[1392]: I1127 23:54:42.367572    1392 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-7vspj" podStartSLOduration=6.367527431 podCreationTimestamp="2023-11-27 23:54:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-11-27 23:54:38.527349849 +0000 UTC m=+16.304294412" watchObservedRunningTime="2023-11-27 23:54:42.367527431 +0000 UTC m=+20.144471993"
	Nov 27 23:55:08 multinode-784312 kubelet[1392]: I1127 23:55:08.156407    1392 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Nov 27 23:55:08 multinode-784312 kubelet[1392]: I1127 23:55:08.186077    1392 topology_manager.go:215] "Topology Admit Handler" podUID="bd970bc6-edbd-4f25-830d-54a301351a7e" podNamespace="kube-system" podName="coredns-5dd5756b68-n6fjh"
	Nov 27 23:55:08 multinode-784312 kubelet[1392]: I1127 23:55:08.191202    1392 topology_manager.go:215] "Topology Admit Handler" podUID="712aa9f0-276e-458d-9783-9a05ee6dfb39" podNamespace="kube-system" podName="storage-provisioner"
	Nov 27 23:55:08 multinode-784312 kubelet[1392]: I1127 23:55:08.204655    1392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bd970bc6-edbd-4f25-830d-54a301351a7e-config-volume\") pod \"coredns-5dd5756b68-n6fjh\" (UID: \"bd970bc6-edbd-4f25-830d-54a301351a7e\") " pod="kube-system/coredns-5dd5756b68-n6fjh"
	Nov 27 23:55:08 multinode-784312 kubelet[1392]: I1127 23:55:08.204714    1392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7spx9\" (UniqueName: \"kubernetes.io/projected/bd970bc6-edbd-4f25-830d-54a301351a7e-kube-api-access-7spx9\") pod \"coredns-5dd5756b68-n6fjh\" (UID: \"bd970bc6-edbd-4f25-830d-54a301351a7e\") " pod="kube-system/coredns-5dd5756b68-n6fjh"
	Nov 27 23:55:08 multinode-784312 kubelet[1392]: I1127 23:55:08.305971    1392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49ktj\" (UniqueName: \"kubernetes.io/projected/712aa9f0-276e-458d-9783-9a05ee6dfb39-kube-api-access-49ktj\") pod \"storage-provisioner\" (UID: \"712aa9f0-276e-458d-9783-9a05ee6dfb39\") " pod="kube-system/storage-provisioner"
	Nov 27 23:55:08 multinode-784312 kubelet[1392]: I1127 23:55:08.306026    1392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/712aa9f0-276e-458d-9783-9a05ee6dfb39-tmp\") pod \"storage-provisioner\" (UID: \"712aa9f0-276e-458d-9783-9a05ee6dfb39\") " pod="kube-system/storage-provisioner"
	Nov 27 23:55:09 multinode-784312 kubelet[1392]: I1127 23:55:09.584599    1392 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=31.584555113 podCreationTimestamp="2023-11-27 23:54:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-11-27 23:55:09.572614772 +0000 UTC m=+47.349559325" watchObservedRunningTime="2023-11-27 23:55:09.584555113 +0000 UTC m=+47.361499675"
	Nov 27 23:56:01 multinode-784312 kubelet[1392]: I1127 23:56:01.374113    1392 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-n6fjh" podStartSLOduration=85.374065012 podCreationTimestamp="2023-11-27 23:54:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-11-27 23:55:09.584909464 +0000 UTC m=+47.361854018" watchObservedRunningTime="2023-11-27 23:56:01.374065012 +0000 UTC m=+99.151009566"
	Nov 27 23:56:01 multinode-784312 kubelet[1392]: I1127 23:56:01.374336    1392 topology_manager.go:215] "Topology Admit Handler" podUID="d946f7b7-263f-4e23-9a59-631b856fde43" podNamespace="default" podName="busybox-5bc68d56bd-cls7b"
	Nov 27 23:56:01 multinode-784312 kubelet[1392]: I1127 23:56:01.480263    1392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfrm8\" (UniqueName: \"kubernetes.io/projected/d946f7b7-263f-4e23-9a59-631b856fde43-kube-api-access-rfrm8\") pod \"busybox-5bc68d56bd-cls7b\" (UID: \"d946f7b7-263f-4e23-9a59-631b856fde43\") " pod="default/busybox-5bc68d56bd-cls7b"
	Nov 27 23:56:01 multinode-784312 kubelet[1392]: W1127 23:56:01.727238    1392 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/6fd8b66557924c6a255350f8721ffd2110fc0027983ca6688ef0474215f93244/crio-4f04f71919a2a382169a50c4a91bd8dcdd42c37d9757e210b2373d0555792398 WatchSource:0}: Error finding container 4f04f71919a2a382169a50c4a91bd8dcdd42c37d9757e210b2373d0555792398: Status 404 returned error can't find the container with id 4f04f71919a2a382169a50c4a91bd8dcdd42c37d9757e210b2373d0555792398
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p multinode-784312 -n multinode-784312
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-784312 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (4.35s)

TestScheduledStopUnix (34.06s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-986183 --memory=2048 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-986183 --memory=2048 --driver=docker  --container-runtime=crio: (29.201049405s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-986183 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-986183 -n scheduled-stop-986183
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-986183 --schedule 15s
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:98: process 1558065 running but should have been killed on reschedule of stop
panic.go:523: *** TestScheduledStopUnix FAILED at 2023-11-28 00:05:23.473156001 +0000 UTC m=+2134.691990202
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestScheduledStopUnix]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect scheduled-stop-986183
helpers_test.go:235: (dbg) docker inspect scheduled-stop-986183:

-- stdout --
	[
	    {
	        "Id": "e4d758508b722b3f92a616ed69e221b3f8bfc395a062036897186ab60cb412d0",
	        "Created": "2023-11-28T00:04:59.29433545Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1556353,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-11-28T00:04:59.63175388Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:6b2707af759835616662ab4511fa7cfc968ed5500b8f30a5d231d9af64582310",
	        "ResolvConfPath": "/var/lib/docker/containers/e4d758508b722b3f92a616ed69e221b3f8bfc395a062036897186ab60cb412d0/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e4d758508b722b3f92a616ed69e221b3f8bfc395a062036897186ab60cb412d0/hostname",
	        "HostsPath": "/var/lib/docker/containers/e4d758508b722b3f92a616ed69e221b3f8bfc395a062036897186ab60cb412d0/hosts",
	        "LogPath": "/var/lib/docker/containers/e4d758508b722b3f92a616ed69e221b3f8bfc395a062036897186ab60cb412d0/e4d758508b722b3f92a616ed69e221b3f8bfc395a062036897186ab60cb412d0-json.log",
	        "Name": "/scheduled-stop-986183",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "scheduled-stop-986183:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "scheduled-stop-986183",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2147483648,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/19f59502fb715538483977e3168220bae7704dd02a16bb210f8148bb525fdf31-init/diff:/var/lib/docker/overlay2/66e18f6b92e8847ad9065a2bde54888b27c493e8cb472385d095e2aee2f57672/diff",
	                "MergedDir": "/var/lib/docker/overlay2/19f59502fb715538483977e3168220bae7704dd02a16bb210f8148bb525fdf31/merged",
	                "UpperDir": "/var/lib/docker/overlay2/19f59502fb715538483977e3168220bae7704dd02a16bb210f8148bb525fdf31/diff",
	                "WorkDir": "/var/lib/docker/overlay2/19f59502fb715538483977e3168220bae7704dd02a16bb210f8148bb525fdf31/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "scheduled-stop-986183",
	                "Source": "/var/lib/docker/volumes/scheduled-stop-986183/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "scheduled-stop-986183",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "scheduled-stop-986183",
	                "name.minikube.sigs.k8s.io": "scheduled-stop-986183",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "30ef0770b0be711ba7241e5412395ec213c84866703509019afb4e64376555f2",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34207"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34206"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34203"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34205"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34204"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/30ef0770b0be",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "scheduled-stop-986183": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "e4d758508b72",
	                        "scheduled-stop-986183"
	                    ],
	                    "NetworkID": "463d101a6d5567464f7cf27d3467fc5135f78cd6431a08a74510e4ad662a1e84",
	                    "EndpointID": "a03eff72ed36b0603587ce3182a35063dd773dc4aa3fae7ce23798373d1c39fe",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]
-- /stdout --
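When only one field of the dump above matters, `docker inspect` takes a Go template via `--format`; the harness itself uses this further down (e.g. `--format={{.State.Status}}`). A small sketch of wrapping that call, assuming the docker CLI is on PATH (hypothetical helper, not part of the test suite):

```go
// Hypothetical sketch; assumes the docker CLI is on PATH.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerStatus extracts .State.Status from `docker container inspect`,
// e.g. "running" for scheduled-stop-986183 in the dump above.
func containerStatus(name string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect",
		"--format", "{{.State.Status}}", name).Output()
	if err != nil {
		return "", fmt.Errorf("docker inspect %s: %w", name, err)
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	status, err := containerStatus("scheduled-stop-986183")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println(status)
}
```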
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-986183 -n scheduled-stop-986183
helpers_test.go:244: <<< TestScheduledStopUnix FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestScheduledStopUnix]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p scheduled-stop-986183 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p scheduled-stop-986183 logs -n 25: (1.164009033s)
helpers_test.go:252: TestScheduledStopUnix logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|-----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |        Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|-----------------------|---------|---------|---------------------|---------------------|
	| stop    | -p multinode-784312            | multinode-784312      | jenkins | v1.32.0 | 27 Nov 23 23:57 UTC | 27 Nov 23 23:57 UTC |
	| start   | -p multinode-784312            | multinode-784312      | jenkins | v1.32.0 | 27 Nov 23 23:57 UTC | 27 Nov 23 23:59 UTC |
	|         | --wait=true -v=8               |                       |         |         |                     |                     |
	|         | --alsologtostderr              |                       |         |         |                     |                     |
	| node    | list -p multinode-784312       | multinode-784312      | jenkins | v1.32.0 | 27 Nov 23 23:59 UTC |                     |
	| node    | multinode-784312 node delete   | multinode-784312      | jenkins | v1.32.0 | 27 Nov 23 23:59 UTC | 27 Nov 23 23:59 UTC |
	|         | m03                            |                       |         |         |                     |                     |
	| stop    | multinode-784312 stop          | multinode-784312      | jenkins | v1.32.0 | 27 Nov 23 23:59 UTC | 27 Nov 23 23:59 UTC |
	| start   | -p multinode-784312            | multinode-784312      | jenkins | v1.32.0 | 27 Nov 23 23:59 UTC | 28 Nov 23 00:01 UTC |
	|         | --wait=true -v=8               |                       |         |         |                     |                     |
	|         | --alsologtostderr              |                       |         |         |                     |                     |
	|         | --driver=docker                |                       |         |         |                     |                     |
	|         | --container-runtime=crio       |                       |         |         |                     |                     |
	| node    | list -p multinode-784312       | multinode-784312      | jenkins | v1.32.0 | 28 Nov 23 00:01 UTC |                     |
	| start   | -p multinode-784312-m02        | multinode-784312-m02  | jenkins | v1.32.0 | 28 Nov 23 00:01 UTC |                     |
	|         | --driver=docker                |                       |         |         |                     |                     |
	|         | --container-runtime=crio       |                       |         |         |                     |                     |
	| start   | -p multinode-784312-m03        | multinode-784312-m03  | jenkins | v1.32.0 | 28 Nov 23 00:01 UTC | 28 Nov 23 00:01 UTC |
	|         | --driver=docker                |                       |         |         |                     |                     |
	|         | --container-runtime=crio       |                       |         |         |                     |                     |
	| node    | add -p multinode-784312        | multinode-784312      | jenkins | v1.32.0 | 28 Nov 23 00:01 UTC |                     |
	| delete  | -p multinode-784312-m03        | multinode-784312-m03  | jenkins | v1.32.0 | 28 Nov 23 00:01 UTC | 28 Nov 23 00:01 UTC |
	| delete  | -p multinode-784312            | multinode-784312      | jenkins | v1.32.0 | 28 Nov 23 00:01 UTC | 28 Nov 23 00:01 UTC |
	| start   | -p test-preload-169927         | test-preload-169927   | jenkins | v1.32.0 | 28 Nov 23 00:01 UTC | 28 Nov 23 00:03 UTC |
	|         | --memory=2200                  |                       |         |         |                     |                     |
	|         | --alsologtostderr              |                       |         |         |                     |                     |
	|         | --wait=true --preload=false    |                       |         |         |                     |                     |
	|         | --driver=docker                |                       |         |         |                     |                     |
	|         | --container-runtime=crio       |                       |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.4   |                       |         |         |                     |                     |
	| image   | test-preload-169927 image pull | test-preload-169927   | jenkins | v1.32.0 | 28 Nov 23 00:03 UTC | 28 Nov 23 00:03 UTC |
	|         | gcr.io/k8s-minikube/busybox    |                       |         |         |                     |                     |
	| stop    | -p test-preload-169927         | test-preload-169927   | jenkins | v1.32.0 | 28 Nov 23 00:03 UTC | 28 Nov 23 00:03 UTC |
	| start   | -p test-preload-169927         | test-preload-169927   | jenkins | v1.32.0 | 28 Nov 23 00:03 UTC | 28 Nov 23 00:04 UTC |
	|         | --memory=2200                  |                       |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                       |         |         |                     |                     |
	|         | --wait=true --driver=docker    |                       |         |         |                     |                     |
	|         | --container-runtime=crio       |                       |         |         |                     |                     |
	| image   | test-preload-169927 image list | test-preload-169927   | jenkins | v1.32.0 | 28 Nov 23 00:04 UTC | 28 Nov 23 00:04 UTC |
	| delete  | -p test-preload-169927         | test-preload-169927   | jenkins | v1.32.0 | 28 Nov 23 00:04 UTC | 28 Nov 23 00:04 UTC |
	| start   | -p scheduled-stop-986183       | scheduled-stop-986183 | jenkins | v1.32.0 | 28 Nov 23 00:04 UTC | 28 Nov 23 00:05 UTC |
	|         | --memory=2048 --driver=docker  |                       |         |         |                     |                     |
	|         | --container-runtime=crio       |                       |         |         |                     |                     |
	| stop    | -p scheduled-stop-986183       | scheduled-stop-986183 | jenkins | v1.32.0 | 28 Nov 23 00:05 UTC |                     |
	|         | --schedule 5m                  |                       |         |         |                     |                     |
	| stop    | -p scheduled-stop-986183       | scheduled-stop-986183 | jenkins | v1.32.0 | 28 Nov 23 00:05 UTC |                     |
	|         | --schedule 5m                  |                       |         |         |                     |                     |
	| stop    | -p scheduled-stop-986183       | scheduled-stop-986183 | jenkins | v1.32.0 | 28 Nov 23 00:05 UTC |                     |
	|         | --schedule 5m                  |                       |         |         |                     |                     |
	| stop    | -p scheduled-stop-986183       | scheduled-stop-986183 | jenkins | v1.32.0 | 28 Nov 23 00:05 UTC |                     |
	|         | --schedule 15s                 |                       |         |         |                     |                     |
	| stop    | -p scheduled-stop-986183       | scheduled-stop-986183 | jenkins | v1.32.0 | 28 Nov 23 00:05 UTC |                     |
	|         | --schedule 15s                 |                       |         |         |                     |                     |
	| stop    | -p scheduled-stop-986183       | scheduled-stop-986183 | jenkins | v1.32.0 | 28 Nov 23 00:05 UTC |                     |
	|         | --schedule 15s                 |                       |         |         |                     |                     |
	|---------|--------------------------------|-----------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/11/28 00:04:53
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.21.4 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1128 00:04:53.735930 1555893 out.go:296] Setting OutFile to fd 1 ...
	I1128 00:04:53.736094 1555893 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1128 00:04:53.736098 1555893 out.go:309] Setting ErrFile to fd 2...
	I1128 00:04:53.736103 1555893 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1128 00:04:53.736467 1555893 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17206-1455288/.minikube/bin
	I1128 00:04:53.736969 1555893 out.go:303] Setting JSON to false
	I1128 00:04:53.738053 1555893 start.go:128] hostinfo: {"hostname":"ip-172-31-29-130","uptime":24443,"bootTime":1701105451,"procs":277,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1050-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1128 00:04:53.738143 1555893 start.go:138] virtualization:  
	I1128 00:04:53.740512 1555893 out.go:177] * [scheduled-stop-986183] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I1128 00:04:53.742267 1555893 out.go:177]   - MINIKUBE_LOCATION=17206
	I1128 00:04:53.744083 1555893 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1128 00:04:53.742422 1555893 notify.go:220] Checking for updates...
	I1128 00:04:53.747487 1555893 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17206-1455288/kubeconfig
	I1128 00:04:53.749455 1555893 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17206-1455288/.minikube
	I1128 00:04:53.751215 1555893 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1128 00:04:53.753013 1555893 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1128 00:04:53.754778 1555893 driver.go:378] Setting default libvirt URI to qemu:///system
	I1128 00:04:53.778850 1555893 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1128 00:04:53.778953 1555893 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1128 00:04:53.859639 1555893 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:35 SystemTime:2023-11-28 00:04:53.848638725 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1050-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1128 00:04:53.859737 1555893 docker.go:295] overlay module found
	I1128 00:04:53.861568 1555893 out.go:177] * Using the docker driver based on user configuration
	I1128 00:04:53.863479 1555893 start.go:298] selected driver: docker
	I1128 00:04:53.863489 1555893 start.go:902] validating driver "docker" against <nil>
	I1128 00:04:53.863501 1555893 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1128 00:04:53.864149 1555893 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1128 00:04:53.937266 1555893 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:35 SystemTime:2023-11-28 00:04:53.927623369 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1050-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1128 00:04:53.937414 1555893 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1128 00:04:53.937627 1555893 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I1128 00:04:53.939470 1555893 out.go:177] * Using Docker driver with root privileges
	I1128 00:04:53.941166 1555893 cni.go:84] Creating CNI manager for ""
	I1128 00:04:53.941178 1555893 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1128 00:04:53.941193 1555893 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1128 00:04:53.941204 1555893 start_flags.go:323] config:
	{Name:scheduled-stop-986183 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:scheduled-stop-986183 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1128 00:04:53.943336 1555893 out.go:177] * Starting control plane node scheduled-stop-986183 in cluster scheduled-stop-986183
	I1128 00:04:53.945121 1555893 cache.go:121] Beginning downloading kic base image for docker with crio
	I1128 00:04:53.946703 1555893 out.go:177] * Pulling base image ...
	I1128 00:04:53.948133 1555893 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1128 00:04:53.948177 1555893 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17206-1455288/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4
	I1128 00:04:53.948185 1555893 cache.go:56] Caching tarball of preloaded images
	I1128 00:04:53.948268 1555893 preload.go:174] Found /home/jenkins/minikube-integration/17206-1455288/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1128 00:04:53.948277 1555893 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I1128 00:04:53.948691 1555893 profile.go:148] Saving config to /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/scheduled-stop-986183/config.json ...
	I1128 00:04:53.948715 1555893 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/scheduled-stop-986183/config.json: {Name:mk327651c2d3602acc0f5f794939e49adcd6f875 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 00:04:53.948885 1555893 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 in local docker daemon
	I1128 00:04:53.966043 1555893 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 in local docker daemon, skipping pull
	I1128 00:04:53.966058 1555893 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 exists in daemon, skipping load
	I1128 00:04:53.966078 1555893 cache.go:194] Successfully downloaded all kic artifacts
	I1128 00:04:53.966151 1555893 start.go:365] acquiring machines lock for scheduled-stop-986183: {Name:mka91edd7cf22455e39bbcd169fce88a2282c693 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1128 00:04:53.966271 1555893 start.go:369] acquired machines lock for "scheduled-stop-986183" in 102.858µs
	I1128 00:04:53.966296 1555893 start.go:93] Provisioning new machine with config: &{Name:scheduled-stop-986183 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:scheduled-stop-986183 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1128 00:04:53.966368 1555893 start.go:125] createHost starting for "" (driver="docker")
	I1128 00:04:53.968674 1555893 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I1128 00:04:53.968961 1555893 start.go:159] libmachine.API.Create for "scheduled-stop-986183" (driver="docker")
	I1128 00:04:53.968991 1555893 client.go:168] LocalClient.Create starting
	I1128 00:04:53.969090 1555893 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17206-1455288/.minikube/certs/ca.pem
	I1128 00:04:53.969136 1555893 main.go:141] libmachine: Decoding PEM data...
	I1128 00:04:53.969151 1555893 main.go:141] libmachine: Parsing certificate...
	I1128 00:04:53.969224 1555893 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17206-1455288/.minikube/certs/cert.pem
	I1128 00:04:53.969245 1555893 main.go:141] libmachine: Decoding PEM data...
	I1128 00:04:53.969255 1555893 main.go:141] libmachine: Parsing certificate...
	I1128 00:04:53.969680 1555893 cli_runner.go:164] Run: docker network inspect scheduled-stop-986183 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1128 00:04:53.988491 1555893 cli_runner.go:211] docker network inspect scheduled-stop-986183 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1128 00:04:53.988556 1555893 network_create.go:281] running [docker network inspect scheduled-stop-986183] to gather additional debugging logs...
	I1128 00:04:53.988571 1555893 cli_runner.go:164] Run: docker network inspect scheduled-stop-986183
	W1128 00:04:54.011238 1555893 cli_runner.go:211] docker network inspect scheduled-stop-986183 returned with exit code 1
	I1128 00:04:54.011261 1555893 network_create.go:284] error running [docker network inspect scheduled-stop-986183]: docker network inspect scheduled-stop-986183: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network scheduled-stop-986183 not found
	I1128 00:04:54.011274 1555893 network_create.go:286] output of [docker network inspect scheduled-stop-986183]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network scheduled-stop-986183 not found
	
	** /stderr **
	I1128 00:04:54.011406 1555893 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1128 00:04:54.032324 1555893 network.go:214] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-dd6178619d28 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:d1:b7:12:be} reservation:<nil>}
	I1128 00:04:54.032588 1555893 network.go:214] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-4a3aa0fb5c5a IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:14:f0:39:a7} reservation:<nil>}
	I1128 00:04:54.032967 1555893 network.go:209] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40024e9000}
	I1128 00:04:54.032989 1555893 network_create.go:124] attempt to create docker network scheduled-stop-986183 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I1128 00:04:54.033047 1555893 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=scheduled-stop-986183 scheduled-stop-986183
	I1128 00:04:54.106220 1555893 network_create.go:108] docker network scheduled-stop-986183 192.168.67.0/24 created
	I1128 00:04:54.106243 1555893 kic.go:121] calculated static IP "192.168.67.2" for the "scheduled-stop-986183" container
	I1128 00:04:54.106324 1555893 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1128 00:04:54.123668 1555893 cli_runner.go:164] Run: docker volume create scheduled-stop-986183 --label name.minikube.sigs.k8s.io=scheduled-stop-986183 --label created_by.minikube.sigs.k8s.io=true
	I1128 00:04:54.142847 1555893 oci.go:103] Successfully created a docker volume scheduled-stop-986183
	I1128 00:04:54.142926 1555893 cli_runner.go:164] Run: docker run --rm --name scheduled-stop-986183-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=scheduled-stop-986183 --entrypoint /usr/bin/test -v scheduled-stop-986183:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 -d /var/lib
	I1128 00:04:54.781403 1555893 oci.go:107] Successfully prepared a docker volume scheduled-stop-986183
	I1128 00:04:54.781437 1555893 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1128 00:04:54.781456 1555893 kic.go:194] Starting extracting preloaded images to volume ...
	I1128 00:04:54.781531 1555893 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17206-1455288/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v scheduled-stop-986183:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 -I lz4 -xf /preloaded.tar -C /extractDir
	I1128 00:04:59.204752 1555893 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17206-1455288/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v scheduled-stop-986183:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 -I lz4 -xf /preloaded.tar -C /extractDir: (4.423165504s)
	I1128 00:04:59.204774 1555893 kic.go:203] duration metric: took 4.423315 seconds to extract preloaded images to volume
	W1128 00:04:59.204929 1555893 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1128 00:04:59.205066 1555893 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1128 00:04:59.272561 1555893 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname scheduled-stop-986183 --name scheduled-stop-986183 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=scheduled-stop-986183 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=scheduled-stop-986183 --network scheduled-stop-986183 --ip 192.168.67.2 --volume scheduled-stop-986183:/var --security-opt apparmor=unconfined --memory=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50
	I1128 00:04:59.641719 1555893 cli_runner.go:164] Run: docker container inspect scheduled-stop-986183 --format={{.State.Running}}
	I1128 00:04:59.666108 1555893 cli_runner.go:164] Run: docker container inspect scheduled-stop-986183 --format={{.State.Status}}
	I1128 00:04:59.703081 1555893 cli_runner.go:164] Run: docker exec scheduled-stop-986183 stat /var/lib/dpkg/alternatives/iptables
	I1128 00:04:59.791897 1555893 oci.go:144] the created container "scheduled-stop-986183" has a running status.
	I1128 00:04:59.791914 1555893 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17206-1455288/.minikube/machines/scheduled-stop-986183/id_rsa...
	I1128 00:05:00.796095 1555893 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17206-1455288/.minikube/machines/scheduled-stop-986183/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1128 00:05:00.833450 1555893 cli_runner.go:164] Run: docker container inspect scheduled-stop-986183 --format={{.State.Status}}
	I1128 00:05:00.864402 1555893 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1128 00:05:00.864414 1555893 kic_runner.go:114] Args: [docker exec --privileged scheduled-stop-986183 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1128 00:05:00.957975 1555893 cli_runner.go:164] Run: docker container inspect scheduled-stop-986183 --format={{.State.Status}}
	I1128 00:05:00.994679 1555893 machine.go:88] provisioning docker machine ...
	I1128 00:05:00.994701 1555893 ubuntu.go:169] provisioning hostname "scheduled-stop-986183"
	I1128 00:05:00.994781 1555893 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-986183
	I1128 00:05:01.019383 1555893 main.go:141] libmachine: Using SSH client type: native
	I1128 00:05:01.019818 1555893 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3be600] 0x3c0d70 <nil>  [] 0s} 127.0.0.1 34207 <nil> <nil>}
	I1128 00:05:01.019829 1555893 main.go:141] libmachine: About to run SSH command:
	sudo hostname scheduled-stop-986183 && echo "scheduled-stop-986183" | sudo tee /etc/hostname
	I1128 00:05:01.216518 1555893 main.go:141] libmachine: SSH cmd err, output: <nil>: scheduled-stop-986183
	
	I1128 00:05:01.216588 1555893 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-986183
	I1128 00:05:01.244552 1555893 main.go:141] libmachine: Using SSH client type: native
	I1128 00:05:01.244960 1555893 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3be600] 0x3c0d70 <nil>  [] 0s} 127.0.0.1 34207 <nil> <nil>}
	I1128 00:05:01.244976 1555893 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sscheduled-stop-986183' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 scheduled-stop-986183/g' /etc/hosts;
				else 
					echo '127.0.1.1 scheduled-stop-986183' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1128 00:05:01.379879 1555893 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1128 00:05:01.379896 1555893 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17206-1455288/.minikube CaCertPath:/home/jenkins/minikube-integration/17206-1455288/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17206-1455288/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17206-1455288/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17206-1455288/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17206-1455288/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17206-1455288/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17206-1455288/.minikube}
	I1128 00:05:01.379924 1555893 ubuntu.go:177] setting up certificates
	I1128 00:05:01.379932 1555893 provision.go:83] configureAuth start
	I1128 00:05:01.380003 1555893 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" scheduled-stop-986183
	I1128 00:05:01.398751 1555893 provision.go:138] copyHostCerts
	I1128 00:05:01.398816 1555893 exec_runner.go:144] found /home/jenkins/minikube-integration/17206-1455288/.minikube/ca.pem, removing ...
	I1128 00:05:01.398824 1555893 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17206-1455288/.minikube/ca.pem
	I1128 00:05:01.398902 1555893 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17206-1455288/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17206-1455288/.minikube/ca.pem (1078 bytes)
	I1128 00:05:01.398999 1555893 exec_runner.go:144] found /home/jenkins/minikube-integration/17206-1455288/.minikube/cert.pem, removing ...
	I1128 00:05:01.399003 1555893 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17206-1455288/.minikube/cert.pem
	I1128 00:05:01.399029 1555893 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17206-1455288/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17206-1455288/.minikube/cert.pem (1123 bytes)
	I1128 00:05:01.399097 1555893 exec_runner.go:144] found /home/jenkins/minikube-integration/17206-1455288/.minikube/key.pem, removing ...
	I1128 00:05:01.399100 1555893 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17206-1455288/.minikube/key.pem
	I1128 00:05:01.399136 1555893 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17206-1455288/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17206-1455288/.minikube/key.pem (1679 bytes)
	I1128 00:05:01.399191 1555893 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17206-1455288/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17206-1455288/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17206-1455288/.minikube/certs/ca-key.pem org=jenkins.scheduled-stop-986183 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube scheduled-stop-986183]
	I1128 00:05:01.970978 1555893 provision.go:172] copyRemoteCerts
	I1128 00:05:01.971032 1555893 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1128 00:05:01.971071 1555893 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-986183
	I1128 00:05:01.991475 1555893 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34207 SSHKeyPath:/home/jenkins/minikube-integration/17206-1455288/.minikube/machines/scheduled-stop-986183/id_rsa Username:docker}
	I1128 00:05:02.089174 1555893 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-1455288/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1128 00:05:02.118636 1555893 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-1455288/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1128 00:05:02.149193 1555893 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-1455288/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1671 bytes)
	I1128 00:05:02.180678 1555893 provision.go:86] duration metric: configureAuth took 800.732053ms
	I1128 00:05:02.180697 1555893 ubuntu.go:193] setting minikube options for container-runtime
	I1128 00:05:02.180897 1555893 config.go:182] Loaded profile config "scheduled-stop-986183": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1128 00:05:02.181020 1555893 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-986183
	I1128 00:05:02.199508 1555893 main.go:141] libmachine: Using SSH client type: native
	I1128 00:05:02.199927 1555893 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3be600] 0x3c0d70 <nil>  [] 0s} 127.0.0.1 34207 <nil> <nil>}
	I1128 00:05:02.199943 1555893 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1128 00:05:02.449534 1555893 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1128 00:05:02.449557 1555893 machine.go:91] provisioned docker machine in 1.454865888s
	I1128 00:05:02.449567 1555893 client.go:171] LocalClient.Create took 8.480571708s
	I1128 00:05:02.449581 1555893 start.go:167] duration metric: libmachine.API.Create for "scheduled-stop-986183" took 8.480622809s
	I1128 00:05:02.449594 1555893 start.go:300] post-start starting for "scheduled-stop-986183" (driver="docker")
	I1128 00:05:02.449604 1555893 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1128 00:05:02.449668 1555893 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1128 00:05:02.449710 1555893 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-986183
	I1128 00:05:02.468233 1555893 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34207 SSHKeyPath:/home/jenkins/minikube-integration/17206-1455288/.minikube/machines/scheduled-stop-986183/id_rsa Username:docker}
	I1128 00:05:02.565966 1555893 ssh_runner.go:195] Run: cat /etc/os-release
	I1128 00:05:02.570467 1555893 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1128 00:05:02.570493 1555893 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1128 00:05:02.570507 1555893 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1128 00:05:02.570513 1555893 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1128 00:05:02.570524 1555893 filesync.go:126] Scanning /home/jenkins/minikube-integration/17206-1455288/.minikube/addons for local assets ...
	I1128 00:05:02.570599 1555893 filesync.go:126] Scanning /home/jenkins/minikube-integration/17206-1455288/.minikube/files for local assets ...
	I1128 00:05:02.570691 1555893 filesync.go:149] local asset: /home/jenkins/minikube-integration/17206-1455288/.minikube/files/etc/ssl/certs/14606522.pem -> 14606522.pem in /etc/ssl/certs
	I1128 00:05:02.570797 1555893 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1128 00:05:02.581885 1555893 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-1455288/.minikube/files/etc/ssl/certs/14606522.pem --> /etc/ssl/certs/14606522.pem (1708 bytes)
	I1128 00:05:02.610947 1555893 start.go:303] post-start completed in 161.338839ms
	I1128 00:05:02.611359 1555893 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" scheduled-stop-986183
	I1128 00:05:02.629136 1555893 profile.go:148] Saving config to /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/scheduled-stop-986183/config.json ...
	I1128 00:05:02.629436 1555893 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1128 00:05:02.629476 1555893 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-986183
	I1128 00:05:02.648019 1555893 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34207 SSHKeyPath:/home/jenkins/minikube-integration/17206-1455288/.minikube/machines/scheduled-stop-986183/id_rsa Username:docker}
	I1128 00:05:02.740538 1555893 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1128 00:05:02.747189 1555893 start.go:128] duration metric: createHost completed in 8.780804486s
	I1128 00:05:02.747206 1555893 start.go:83] releasing machines lock for "scheduled-stop-986183", held for 8.780928391s
	I1128 00:05:02.747277 1555893 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" scheduled-stop-986183
	I1128 00:05:02.765468 1555893 ssh_runner.go:195] Run: cat /version.json
	I1128 00:05:02.765505 1555893 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1128 00:05:02.765523 1555893 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-986183
	I1128 00:05:02.765553 1555893 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-986183
	I1128 00:05:02.785432 1555893 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34207 SSHKeyPath:/home/jenkins/minikube-integration/17206-1455288/.minikube/machines/scheduled-stop-986183/id_rsa Username:docker}
	I1128 00:05:02.786371 1555893 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34207 SSHKeyPath:/home/jenkins/minikube-integration/17206-1455288/.minikube/machines/scheduled-stop-986183/id_rsa Username:docker}
	I1128 00:05:02.878688 1555893 ssh_runner.go:195] Run: systemctl --version
	I1128 00:05:03.020046 1555893 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1128 00:05:03.170799 1555893 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1128 00:05:03.177307 1555893 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1128 00:05:03.210667 1555893 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1128 00:05:03.210750 1555893 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1128 00:05:03.253941 1555893 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I1128 00:05:03.253955 1555893 start.go:472] detecting cgroup driver to use...
	I1128 00:05:03.253985 1555893 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1128 00:05:03.254044 1555893 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1128 00:05:03.272794 1555893 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1128 00:05:03.286712 1555893 docker.go:203] disabling cri-docker service (if available) ...
	I1128 00:05:03.286765 1555893 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1128 00:05:03.303252 1555893 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1128 00:05:03.321083 1555893 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1128 00:05:03.426143 1555893 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1128 00:05:03.533385 1555893 docker.go:219] disabling docker service ...
	I1128 00:05:03.533447 1555893 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1128 00:05:03.556478 1555893 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1128 00:05:03.570968 1555893 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1128 00:05:03.672940 1555893 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1128 00:05:03.783440 1555893 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1128 00:05:03.798217 1555893 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1128 00:05:03.818841 1555893 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1128 00:05:03.818908 1555893 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1128 00:05:03.831993 1555893 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1128 00:05:03.832064 1555893 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1128 00:05:03.844756 1555893 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1128 00:05:03.857999 1555893 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
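Taken together, the three sed edits above pin the pause image and move CRI-O onto the cgroupfs cgroup manager with conmon placed in the pod cgroup. Reconstructed from the sed expressions alone (the [crio.image] and [crio.runtime] section headers are CRI-O's stock layout and were not captured in this log), /etc/crio/crio.conf.d/02-crio.conf should end up containing roughly:

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.9"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"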
	I1128 00:05:03.869921 1555893 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1128 00:05:03.881029 1555893 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1128 00:05:03.891361 1555893 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1128 00:05:03.901545 1555893 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1128 00:05:03.999712 1555893 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1128 00:05:04.130093 1555893 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1128 00:05:04.130161 1555893 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1128 00:05:04.135084 1555893 start.go:540] Will wait 60s for crictl version
	I1128 00:05:04.135144 1555893 ssh_runner.go:195] Run: which crictl
	I1128 00:05:04.140216 1555893 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1128 00:05:04.183824 1555893 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1128 00:05:04.183902 1555893 ssh_runner.go:195] Run: crio --version
	I1128 00:05:04.229398 1555893 ssh_runner.go:195] Run: crio --version
	I1128 00:05:04.276280 1555893 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.6 ...
	I1128 00:05:04.278028 1555893 cli_runner.go:164] Run: docker network inspect scheduled-stop-986183 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1128 00:05:04.296436 1555893 ssh_runner.go:195] Run: grep 192.168.67.1	host.minikube.internal$ /etc/hosts
	I1128 00:05:04.301355 1555893 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.67.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
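The bash one-liner above updates /etc/hosts idempotently: strip any stale host.minikube.internal entry, append the current gateway mapping, then copy the result back over /etc/hosts. The same logic as a short Go sketch, with the IP and hostname taken from the log:

	// update_hosts.go: idempotently (re)add the host.minikube.internal entry.
	package main

	import (
		"os"
		"strings"
	)

	func main() {
		const entry = "192.168.67.1\thost.minikube.internal"
		data, err := os.ReadFile("/etc/hosts")
		if err != nil {
			panic(err)
		}
		lines := strings.Split(strings.TrimRight(string(data), "\n"), "\n")
		kept := lines[:0]
		for _, l := range lines {
			if !strings.HasSuffix(l, "\thost.minikube.internal") {
				kept = append(kept, l) // keep every unrelated line as-is
			}
		}
		kept = append(kept, entry)
		if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
			panic(err)
		}
	}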
	I1128 00:05:04.315227 1555893 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1128 00:05:04.315279 1555893 ssh_runner.go:195] Run: sudo crictl images --output json
	I1128 00:05:04.381065 1555893 crio.go:496] all images are preloaded for cri-o runtime.
	I1128 00:05:04.381077 1555893 crio.go:415] Images already preloaded, skipping extraction
	I1128 00:05:04.381132 1555893 ssh_runner.go:195] Run: sudo crictl images --output json
	I1128 00:05:04.421431 1555893 crio.go:496] all images are preloaded for cri-o runtime.
	I1128 00:05:04.421443 1555893 cache_images.go:84] Images are preloaded, skipping loading
	I1128 00:05:04.421514 1555893 ssh_runner.go:195] Run: crio config
	I1128 00:05:04.477151 1555893 cni.go:84] Creating CNI manager for ""
	I1128 00:05:04.477164 1555893 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1128 00:05:04.477198 1555893 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1128 00:05:04.477220 1555893 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:scheduled-stop-986183 NodeName:scheduled-stop-986183 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1128 00:05:04.477355 1555893 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "scheduled-stop-986183"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
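The kubeadm config above is rendered from the kubeadm options struct logged a few lines earlier. As a rough sketch of that rendering step (the struct fields and template below are illustrative stand-ins, not minikube's real names), the pattern is a text/template fill-in:

	// render_init_config.go: substitute option values into a config template.
	package main

	import (
		"os"
		"text/template"
	)

	// options carries the handful of values this sketch substitutes into the
	// InitConfiguration header shown above.
	type options struct {
		AdvertiseAddress string
		BindPort         int
		NodeName         string
		NodeIP           string
	}

	var initCfg = template.Must(template.New("init").Parse(`apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: {{.AdvertiseAddress}}
	  bindPort: {{.BindPort}}
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "{{.NodeName}}"
	  kubeletExtraArgs:
	    node-ip: {{.NodeIP}}
	  taints: []
	`))

	func main() {
		// Values copied from the kubeadm options line in the log.
		o := options{
			AdvertiseAddress: "192.168.67.2",
			BindPort:         8443,
			NodeName:         "scheduled-stop-986183",
			NodeIP:           "192.168.67.2",
		}
		if err := initCfg.Execute(os.Stdout, o); err != nil {
			panic(err)
		}
	}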
	
	I1128 00:05:04.477434 1555893 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=scheduled-stop-986183 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:scheduled-stop-986183 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
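One detail worth flagging in the kubelet drop-in above: the bare ExecStart= line is intentional. For a non-oneshot service systemd allows only a single ExecStart, so a drop-in must first clear the value inherited from the base kubelet.service with an empty assignment before supplying its own. The generic pattern (illustrative paths, not from this log) is:

	[Service]
	ExecStart=
	ExecStart=/path/to/replacement/command --with-flags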
	I1128 00:05:04.477492 1555893 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1128 00:05:04.488755 1555893 binaries.go:44] Found k8s binaries, skipping transfer
	I1128 00:05:04.488833 1555893 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1128 00:05:04.499635 1555893 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (431 bytes)
	I1128 00:05:04.520673 1555893 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1128 00:05:04.542486 1555893 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2102 bytes)
	I1128 00:05:04.564026 1555893 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I1128 00:05:04.568673 1555893 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1128 00:05:04.582337 1555893 certs.go:56] Setting up /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/scheduled-stop-986183 for IP: 192.168.67.2
	I1128 00:05:04.582359 1555893 certs.go:190] acquiring lock for shared ca certs: {Name:mk268ef230412b241734813f303d69d9b36c42ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 00:05:04.582496 1555893 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17206-1455288/.minikube/ca.key
	I1128 00:05:04.582538 1555893 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17206-1455288/.minikube/proxy-client-ca.key
	I1128 00:05:04.582591 1555893 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/scheduled-stop-986183/client.key
	I1128 00:05:04.582600 1555893 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/scheduled-stop-986183/client.crt with IP's: []
	I1128 00:05:04.820328 1555893 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/scheduled-stop-986183/client.crt ...
	I1128 00:05:04.820344 1555893 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/scheduled-stop-986183/client.crt: {Name:mk7b4432cc20f683bfa401f276ac2fc156c14920 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 00:05:04.820550 1555893 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/scheduled-stop-986183/client.key ...
	I1128 00:05:04.820559 1555893 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/scheduled-stop-986183/client.key: {Name:mk74985108885a1cfe8287899dadfc2bbb58062c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 00:05:04.820655 1555893 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/scheduled-stop-986183/apiserver.key.c7fa3a9e
	I1128 00:05:04.820667 1555893 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/scheduled-stop-986183/apiserver.crt.c7fa3a9e with IP's: [192.168.67.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1128 00:05:05.164817 1555893 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/scheduled-stop-986183/apiserver.crt.c7fa3a9e ...
	I1128 00:05:05.164834 1555893 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/scheduled-stop-986183/apiserver.crt.c7fa3a9e: {Name:mk8dc8355d617403e6e7516ef9e0ccc30fa7543d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 00:05:05.165054 1555893 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/scheduled-stop-986183/apiserver.key.c7fa3a9e ...
	I1128 00:05:05.165067 1555893 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/scheduled-stop-986183/apiserver.key.c7fa3a9e: {Name:mkc830181d9ff59d3c83bce0a77b229890e70185 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 00:05:05.165150 1555893 certs.go:337] copying /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/scheduled-stop-986183/apiserver.crt.c7fa3a9e -> /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/scheduled-stop-986183/apiserver.crt
	I1128 00:05:05.165229 1555893 certs.go:341] copying /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/scheduled-stop-986183/apiserver.key.c7fa3a9e -> /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/scheduled-stop-986183/apiserver.key
	I1128 00:05:05.165288 1555893 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/scheduled-stop-986183/proxy-client.key
	I1128 00:05:05.165299 1555893 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/scheduled-stop-986183/proxy-client.crt with IP's: []
	I1128 00:05:05.404377 1555893 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/scheduled-stop-986183/proxy-client.crt ...
	I1128 00:05:05.404393 1555893 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/scheduled-stop-986183/proxy-client.crt: {Name:mkaf91186d2d37f53e961ecafed9426d1b5dc009 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 00:05:05.404583 1555893 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/scheduled-stop-986183/proxy-client.key ...
	I1128 00:05:05.404591 1555893 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/scheduled-stop-986183/proxy-client.key: {Name:mkc749b1cb0b5b08975e7229bd390c8d6d63fe8f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
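The sequence above produces three CA-signed pairs for the profile: the minikube-user client cert, the apiserver serving cert, and the aggregator proxy-client cert. A compact stdlib-only sketch of the serving-cert step, with the SAN IPs copied from the log line; minikube's own crypto.go helpers differ in detail, and the CA here is freshly self-signed rather than loaded from the cached minikubeCA:

	// mint_serving_cert.go: issue a CA-signed serving cert. Errors elided.
	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// Stand-in CA (minikube would load ca.crt/ca.key from disk instead).
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		ca := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().AddDate(0, 0, 365),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, ca, ca, &caKey.PublicKey, caKey)
		caCert, _ := x509.ParseCertificate(caDER)

		// Serving cert for the apiserver, SANs as in the log line above.
		key, _ := rsa.GenerateKey(rand.Reader, 2048)
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{CommonName: "minikube"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().AddDate(0, 0, 365),
			IPAddresses: []net.IP{
				net.ParseIP("192.168.67.2"), net.ParseIP("10.96.0.1"),
				net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
			},
			ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		}
		der, _ := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}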
	I1128 00:05:05.404779 1555893 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-1455288/.minikube/certs/home/jenkins/minikube-integration/17206-1455288/.minikube/certs/1460652.pem (1338 bytes)
	W1128 00:05:05.404813 1555893 certs.go:433] ignoring /home/jenkins/minikube-integration/17206-1455288/.minikube/certs/home/jenkins/minikube-integration/17206-1455288/.minikube/certs/1460652_empty.pem, impossibly tiny 0 bytes
	I1128 00:05:05.404822 1555893 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-1455288/.minikube/certs/home/jenkins/minikube-integration/17206-1455288/.minikube/certs/ca-key.pem (1679 bytes)
	I1128 00:05:05.404844 1555893 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-1455288/.minikube/certs/home/jenkins/minikube-integration/17206-1455288/.minikube/certs/ca.pem (1078 bytes)
	I1128 00:05:05.404866 1555893 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-1455288/.minikube/certs/home/jenkins/minikube-integration/17206-1455288/.minikube/certs/cert.pem (1123 bytes)
	I1128 00:05:05.404895 1555893 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-1455288/.minikube/certs/home/jenkins/minikube-integration/17206-1455288/.minikube/certs/key.pem (1679 bytes)
	I1128 00:05:05.404941 1555893 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-1455288/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17206-1455288/.minikube/files/etc/ssl/certs/14606522.pem (1708 bytes)
	I1128 00:05:05.405609 1555893 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/scheduled-stop-986183/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1128 00:05:05.434615 1555893 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/scheduled-stop-986183/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1128 00:05:05.464083 1555893 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/scheduled-stop-986183/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1128 00:05:05.492799 1555893 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/scheduled-stop-986183/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1128 00:05:05.521953 1555893 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-1455288/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1128 00:05:05.551090 1555893 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-1455288/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1128 00:05:05.580343 1555893 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-1455288/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1128 00:05:05.608693 1555893 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-1455288/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1128 00:05:05.638127 1555893 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-1455288/.minikube/certs/1460652.pem --> /usr/share/ca-certificates/1460652.pem (1338 bytes)
	I1128 00:05:05.667437 1555893 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-1455288/.minikube/files/etc/ssl/certs/14606522.pem --> /usr/share/ca-certificates/14606522.pem (1708 bytes)
	I1128 00:05:05.696485 1555893 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-1455288/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1128 00:05:05.725531 1555893 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1128 00:05:05.747388 1555893 ssh_runner.go:195] Run: openssl version
	I1128 00:05:05.754713 1555893 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14606522.pem && ln -fs /usr/share/ca-certificates/14606522.pem /etc/ssl/certs/14606522.pem"
	I1128 00:05:05.767042 1555893 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14606522.pem
	I1128 00:05:05.771879 1555893 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Nov 27 23:38 /usr/share/ca-certificates/14606522.pem
	I1128 00:05:05.771944 1555893 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14606522.pem
	I1128 00:05:05.780903 1555893 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14606522.pem /etc/ssl/certs/3ec20f2e.0"
	I1128 00:05:05.793734 1555893 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1128 00:05:05.805641 1555893 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1128 00:05:05.810418 1555893 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Nov 27 23:31 /usr/share/ca-certificates/minikubeCA.pem
	I1128 00:05:05.810477 1555893 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1128 00:05:05.819184 1555893 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1128 00:05:05.831140 1555893 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1460652.pem && ln -fs /usr/share/ca-certificates/1460652.pem /etc/ssl/certs/1460652.pem"
	I1128 00:05:05.843317 1555893 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1460652.pem
	I1128 00:05:05.848029 1555893 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Nov 27 23:38 /usr/share/ca-certificates/1460652.pem
	I1128 00:05:05.848084 1555893 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1460652.pem
	I1128 00:05:05.856857 1555893 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1460652.pem /etc/ssl/certs/51391683.0"
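Each openssl x509 -hash -noout run above prints the subject-name hash that OpenSSL uses for directory lookups, and the ln -fs that follows links /etc/ssl/certs/<hash>.0 to the PEM so the certificate is trusted system-wide. A sketch of that hash-and-symlink step, shelling out to openssl rather than reimplementing its hash:

	// hash_link_cert.go: create the /etc/ssl/certs/<hash>.0 trust symlink.
	// Paths are the ones from the log; creating the link needs root.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	func main() {
		pemPath := "/usr/share/ca-certificates/minikubeCA.pem"
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			panic(err)
		}
		link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
		// ln -fs equivalent: drop any existing link first, then recreate it.
		os.Remove(link)
		if err := os.Symlink(pemPath, link); err != nil {
			panic(err)
		}
		fmt.Println("linked", link)
	}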
	I1128 00:05:05.869020 1555893 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1128 00:05:05.873390 1555893 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1128 00:05:05.873433 1555893 kubeadm.go:404] StartCluster: {Name:scheduled-stop-986183 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:scheduled-stop-986183 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1128 00:05:05.873500 1555893 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1128 00:05:05.873555 1555893 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1128 00:05:05.919411 1555893 cri.go:89] found id: ""
	I1128 00:05:05.919473 1555893 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1128 00:05:05.930745 1555893 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1128 00:05:05.942010 1555893 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I1128 00:05:05.942066 1555893 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1128 00:05:05.952857 1555893 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1128 00:05:05.952896 1555893 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1128 00:05:06.018956 1555893 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I1128 00:05:06.019177 1555893 kubeadm.go:322] [preflight] Running pre-flight checks
	I1128 00:05:06.073753 1555893 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I1128 00:05:06.073813 1555893 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1050-aws
	I1128 00:05:06.073845 1555893 kubeadm.go:322] OS: Linux
	I1128 00:05:06.073927 1555893 kubeadm.go:322] CGROUPS_CPU: enabled
	I1128 00:05:06.073972 1555893 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I1128 00:05:06.074015 1555893 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I1128 00:05:06.074058 1555893 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I1128 00:05:06.074102 1555893 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I1128 00:05:06.074148 1555893 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I1128 00:05:06.074189 1555893 kubeadm.go:322] CGROUPS_PIDS: enabled
	I1128 00:05:06.074233 1555893 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I1128 00:05:06.074283 1555893 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I1128 00:05:06.163136 1555893 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1128 00:05:06.163233 1555893 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1128 00:05:06.163325 1555893 kubeadm.go:322] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1128 00:05:06.420189 1555893 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1128 00:05:06.424347 1555893 out.go:204]   - Generating certificates and keys ...
	I1128 00:05:06.424586 1555893 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1128 00:05:06.424646 1555893 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1128 00:05:06.574566 1555893 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1128 00:05:06.941949 1555893 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1128 00:05:07.536656 1555893 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1128 00:05:07.875578 1555893 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1128 00:05:08.336801 1555893 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1128 00:05:08.337111 1555893 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost scheduled-stop-986183] and IPs [192.168.67.2 127.0.0.1 ::1]
	I1128 00:05:08.971788 1555893 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1128 00:05:08.972142 1555893 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost scheduled-stop-986183] and IPs [192.168.67.2 127.0.0.1 ::1]
	I1128 00:05:09.721117 1555893 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1128 00:05:10.040726 1555893 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1128 00:05:10.326979 1555893 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1128 00:05:10.327247 1555893 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1128 00:05:10.761250 1555893 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1128 00:05:11.173442 1555893 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1128 00:05:11.319596 1555893 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1128 00:05:11.598339 1555893 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1128 00:05:11.598997 1555893 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1128 00:05:11.603346 1555893 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1128 00:05:11.605374 1555893 out.go:204]   - Booting up control plane ...
	I1128 00:05:11.605498 1555893 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1128 00:05:11.605569 1555893 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1128 00:05:11.606109 1555893 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1128 00:05:11.616835 1555893 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1128 00:05:11.617628 1555893 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1128 00:05:11.617860 1555893 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1128 00:05:11.721022 1555893 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1128 00:05:19.223091 1555893 kubeadm.go:322] [apiclient] All control plane components are healthy after 7.502126 seconds
	I1128 00:05:19.223198 1555893 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1128 00:05:19.239717 1555893 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1128 00:05:19.765966 1555893 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1128 00:05:19.766161 1555893 kubeadm.go:322] [mark-control-plane] Marking the node scheduled-stop-986183 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1128 00:05:20.276992 1555893 kubeadm.go:322] [bootstrap-token] Using token: ze2hqd.ti2kifysv11168qt
	I1128 00:05:20.279005 1555893 out.go:204]   - Configuring RBAC rules ...
	I1128 00:05:20.279122 1555893 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1128 00:05:20.284691 1555893 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1128 00:05:20.294278 1555893 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1128 00:05:20.299974 1555893 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1128 00:05:20.304728 1555893 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1128 00:05:20.308774 1555893 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1128 00:05:20.323710 1555893 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1128 00:05:20.572503 1555893 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1128 00:05:20.733637 1555893 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1128 00:05:20.733649 1555893 kubeadm.go:322] 
	I1128 00:05:20.733704 1555893 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1128 00:05:20.733708 1555893 kubeadm.go:322] 
	I1128 00:05:20.733779 1555893 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1128 00:05:20.733783 1555893 kubeadm.go:322] 
	I1128 00:05:20.733805 1555893 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1128 00:05:20.733893 1555893 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1128 00:05:20.733941 1555893 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1128 00:05:20.733944 1555893 kubeadm.go:322] 
	I1128 00:05:20.733994 1555893 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1128 00:05:20.733998 1555893 kubeadm.go:322] 
	I1128 00:05:20.734041 1555893 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1128 00:05:20.734044 1555893 kubeadm.go:322] 
	I1128 00:05:20.734092 1555893 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1128 00:05:20.734160 1555893 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1128 00:05:20.734222 1555893 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1128 00:05:20.734226 1555893 kubeadm.go:322] 
	I1128 00:05:20.734305 1555893 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1128 00:05:20.734375 1555893 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1128 00:05:20.734379 1555893 kubeadm.go:322] 
	I1128 00:05:20.734456 1555893 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token ze2hqd.ti2kifysv11168qt \
	I1128 00:05:20.734559 1555893 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:4bf7580b26a5e006cb8545a36d546acc708a5c0d8ea7cd28dd99f58e9fcb6509 \
	I1128 00:05:20.734578 1555893 kubeadm.go:322] 	--control-plane 
	I1128 00:05:20.734581 1555893 kubeadm.go:322] 
	I1128 00:05:20.734661 1555893 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1128 00:05:20.734664 1555893 kubeadm.go:322] 
	I1128 00:05:20.734739 1555893 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token ze2hqd.ti2kifysv11168qt \
	I1128 00:05:20.734833 1555893 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:4bf7580b26a5e006cb8545a36d546acc708a5c0d8ea7cd28dd99f58e9fcb6509 
	I1128 00:05:20.737207 1555893 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1050-aws\n", err: exit status 1
	I1128 00:05:20.737310 1555893 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1128 00:05:20.737325 1555893 cni.go:84] Creating CNI manager for ""
	I1128 00:05:20.737332 1555893 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1128 00:05:20.739354 1555893 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1128 00:05:20.741561 1555893 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1128 00:05:20.759075 1555893 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I1128 00:05:20.759086 1555893 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1128 00:05:20.820706 1555893 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1128 00:05:21.678941 1555893 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1128 00:05:21.679071 1555893 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:05:21.679143 1555893 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=c8086df69b157f30f19d083fe45cc014f102df45 minikube.k8s.io/name=scheduled-stop-986183 minikube.k8s.io/updated_at=2023_11_28T00_05_21_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:05:21.836547 1555893 ops.go:34] apiserver oom_adj: -16
	I1128 00:05:21.836588 1555893 kubeadm.go:1081] duration metric: took 157.570441ms to wait for elevateKubeSystemPrivileges.
	I1128 00:05:21.836600 1555893 kubeadm.go:406] StartCluster complete in 15.963172512s
	I1128 00:05:21.836615 1555893 settings.go:142] acquiring lock: {Name:mk2effde19f5a08dd61e438cec70b0751f0f2f09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 00:05:21.836675 1555893 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17206-1455288/kubeconfig
	I1128 00:05:21.837367 1555893 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17206-1455288/kubeconfig: {Name:mk024e2b9ecd216772e0b17d0d1d16e859027716 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 00:05:21.839168 1555893 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1128 00:05:21.839430 1555893 config.go:182] Loaded profile config "scheduled-stop-986183": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1128 00:05:21.839466 1555893 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1128 00:05:21.839525 1555893 addons.go:69] Setting storage-provisioner=true in profile "scheduled-stop-986183"
	I1128 00:05:21.839541 1555893 addons.go:231] Setting addon storage-provisioner=true in "scheduled-stop-986183"
	I1128 00:05:21.839593 1555893 host.go:66] Checking if "scheduled-stop-986183" exists ...
	I1128 00:05:21.840067 1555893 cli_runner.go:164] Run: docker container inspect scheduled-stop-986183 --format={{.State.Status}}
	I1128 00:05:21.840214 1555893 addons.go:69] Setting default-storageclass=true in profile "scheduled-stop-986183"
	I1128 00:05:21.840232 1555893 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "scheduled-stop-986183"
	I1128 00:05:21.840518 1555893 cli_runner.go:164] Run: docker container inspect scheduled-stop-986183 --format={{.State.Status}}
	I1128 00:05:21.896231 1555893 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1128 00:05:21.898633 1555893 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1128 00:05:21.898645 1555893 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1128 00:05:21.898709 1555893 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-986183
	I1128 00:05:21.901349 1555893 addons.go:231] Setting addon default-storageclass=true in "scheduled-stop-986183"
	I1128 00:05:21.901377 1555893 host.go:66] Checking if "scheduled-stop-986183" exists ...
	I1128 00:05:21.901937 1555893 cli_runner.go:164] Run: docker container inspect scheduled-stop-986183 --format={{.State.Status}}
	I1128 00:05:21.913490 1555893 kapi.go:248] "coredns" deployment in "kube-system" namespace and "scheduled-stop-986183" context rescaled to 1 replicas
	I1128 00:05:21.913517 1555893 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1128 00:05:21.915656 1555893 out.go:177] * Verifying Kubernetes components...
	I1128 00:05:21.918419 1555893 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1128 00:05:21.983893 1555893 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34207 SSHKeyPath:/home/jenkins/minikube-integration/17206-1455288/.minikube/machines/scheduled-stop-986183/id_rsa Username:docker}
	I1128 00:05:21.991705 1555893 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1128 00:05:21.991717 1555893 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1128 00:05:21.991785 1555893 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-986183
	I1128 00:05:22.031367 1555893 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34207 SSHKeyPath:/home/jenkins/minikube-integration/17206-1455288/.minikube/machines/scheduled-stop-986183/id_rsa Username:docker}
	I1128 00:05:22.051875 1555893 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.67.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1128 00:05:22.052865 1555893 api_server.go:52] waiting for apiserver process to appear ...
	I1128 00:05:22.052924 1555893 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 00:05:22.195061 1555893 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1128 00:05:22.291140 1555893 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1128 00:05:22.431012 1555893 start.go:926] {"host.minikube.internal": 192.168.67.1} host record injected into CoreDNS's ConfigMap
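The sed pipeline a few lines up rewrites the Corefile held in the coredns ConfigMap so that host.minikube.internal resolves to the host gateway, which is what the "host record injected" line confirms. Reconstructed from the sed expressions alone, with the surrounding plugins elided, the patched Corefile should contain roughly:

	.:53 {
	    log
	    errors
	    ...
	    hosts {
	       192.168.67.1 host.minikube.internal
	       fallthrough
	    }
	    forward . /etc/resolv.conf
	    ...
	}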
	I1128 00:05:22.431049 1555893 api_server.go:72] duration metric: took 517.48557ms to wait for apiserver process to appear ...
	I1128 00:05:22.431058 1555893 api_server.go:88] waiting for apiserver healthz status ...
	I1128 00:05:22.431072 1555893 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1128 00:05:22.453940 1555893 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1128 00:05:22.458202 1555893 api_server.go:141] control plane version: v1.28.4
	I1128 00:05:22.458219 1555893 api_server.go:131] duration metric: took 27.156988ms to wait for apiserver health ...
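The healthz wait above is a plain poll of the apiserver's /healthz endpoint until it answers 200 ok. A minimal sketch, with the URL taken from the log; TLS verification is skipped only because the sketch has no access to the cluster CA:

	// wait_healthz.go: poll the apiserver until it reports healthy.
	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout:   2 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(time.Minute)
		for time.Now().Before(deadline) {
			resp, err := client.Get("https://192.168.67.2:8443/healthz")
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Println("apiserver healthy")
					return
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("timed out waiting for /healthz")
	}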
	I1128 00:05:22.458226 1555893 system_pods.go:43] waiting for kube-system pods to appear ...
	I1128 00:05:22.466790 1555893 system_pods.go:59] 4 kube-system pods found
	I1128 00:05:22.466813 1555893 system_pods.go:61] "etcd-scheduled-stop-986183" [c9381173-1304-4714-b81d-b3b3e8f0652e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1128 00:05:22.466820 1555893 system_pods.go:61] "kube-apiserver-scheduled-stop-986183" [e8918047-effe-44e7-bbff-52b3616a9b6d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1128 00:05:22.466828 1555893 system_pods.go:61] "kube-controller-manager-scheduled-stop-986183" [f78a66af-ec77-428a-b3b2-4a6ec9e7c89e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1128 00:05:22.466835 1555893 system_pods.go:61] "kube-scheduler-scheduled-stop-986183" [83dc2d7f-8556-430f-b7a9-dc31672ddd22] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1128 00:05:22.466842 1555893 system_pods.go:74] duration metric: took 8.610446ms to wait for pod list to return data ...
	I1128 00:05:22.466851 1555893 kubeadm.go:581] duration metric: took 553.291158ms to wait for : map[apiserver:true system_pods:true] ...
	I1128 00:05:22.466862 1555893 node_conditions.go:102] verifying NodePressure condition ...
	I1128 00:05:22.476146 1555893 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1128 00:05:22.476168 1555893 node_conditions.go:123] node cpu capacity is 2
	I1128 00:05:22.476180 1555893 node_conditions.go:105] duration metric: took 9.313431ms to run NodePressure ...
	I1128 00:05:22.476191 1555893 start.go:228] waiting for startup goroutines ...
	I1128 00:05:22.794510 1555893 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1128 00:05:22.796168 1555893 addons.go:502] enable addons completed in 956.694946ms: enabled=[storage-provisioner default-storageclass]
	I1128 00:05:22.796204 1555893 start.go:233] waiting for cluster config update ...
	I1128 00:05:22.796216 1555893 start.go:242] writing updated cluster config ...
	I1128 00:05:22.796560 1555893 ssh_runner.go:195] Run: rm -f paused
	I1128 00:05:22.855261 1555893 start.go:600] kubectl: 1.28.4, cluster: 1.28.4 (minor skew: 0)
	I1128 00:05:22.857142 1555893 out.go:177] * Done! kubectl is now configured to use "scheduled-stop-986183" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* Nov 28 00:05:13 scheduled-stop-986183 crio[900]: time="2023-11-28 00:05:13.234286058Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:04b4c447bb9d4840af3bf7e836397379d65df87c86e55dcd27f31a8d11df2419,RepoTags:[registry.k8s.io/kube-apiserver:v1.28.4],RepoDigests:[registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb registry.k8s.io/kube-apiserver@sha256:a4c3e6bec39f5dcb221a2f08266513ab19b7d977ccc76a0bcaf04d4935ac0fb2],Size_:121119694,Uid:&Int64Value{Value:0,},Username:,Spec:nil,},Info:map[string]string{},}" id=98a5bdc8-46ef-4de0-b995-28e8b4e2005f name=/runtime.v1.ImageService/ImageStatus
	Nov 28 00:05:13 scheduled-stop-986183 crio[900]: time="2023-11-28 00:05:13.233925594Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.5.9-0" id=1daac4ba-e000-45c8-89f8-fd1bdaa6bd94 name=/runtime.v1.ImageService/ImageStatus
	Nov 28 00:05:13 scheduled-stop-986183 crio[900]: time="2023-11-28 00:05:13.234432748Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace,RepoTags:[registry.k8s.io/etcd:3.5.9-0],RepoDigests:[registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3 registry.k8s.io/etcd@sha256:e60789d18cc66486e6db4094383f9732280092f07a1f5455ecbe11d404c8e48b],Size_:182203183,Uid:&Int64Value{Value:0,},Username:,Spec:nil,},Info:map[string]string{},}" id=1daac4ba-e000-45c8-89f8-fd1bdaa6bd94 name=/runtime.v1.ImageService/ImageStatus
	Nov 28 00:05:13 scheduled-stop-986183 crio[900]: time="2023-11-28 00:05:13.236123317Z" level=info msg="Creating container: kube-system/etcd-scheduled-stop-986183/etcd" id=899196e2-b77a-47dc-be5e-93e61a9d46ce name=/runtime.v1.RuntimeService/CreateContainer
	Nov 28 00:05:13 scheduled-stop-986183 crio[900]: time="2023-11-28 00:05:13.236216953Z" level=warning msg="Allowed annotations are specified for workload []"
	Nov 28 00:05:13 scheduled-stop-986183 crio[900]: time="2023-11-28 00:05:13.237899103Z" level=info msg="Creating container: kube-system/kube-controller-manager-scheduled-stop-986183/kube-controller-manager" id=59878822-5a27-432f-9ad6-43089d032eab name=/runtime.v1.RuntimeService/CreateContainer
	Nov 28 00:05:13 scheduled-stop-986183 crio[900]: time="2023-11-28 00:05:13.237990434Z" level=warning msg="Allowed annotations are specified for workload []"
	Nov 28 00:05:13 scheduled-stop-986183 crio[900]: time="2023-11-28 00:05:13.237902722Z" level=info msg="Creating container: kube-system/kube-scheduler-scheduled-stop-986183/kube-scheduler" id=118c270f-cb7a-4b9a-8555-a83a3908282b name=/runtime.v1.RuntimeService/CreateContainer
	Nov 28 00:05:13 scheduled-stop-986183 crio[900]: time="2023-11-28 00:05:13.238035488Z" level=warning msg="Allowed annotations are specified for workload []"
	Nov 28 00:05:13 scheduled-stop-986183 crio[900]: time="2023-11-28 00:05:13.238754948Z" level=info msg="Creating container: kube-system/kube-apiserver-scheduled-stop-986183/kube-apiserver" id=c11fd5b1-5fa7-47bd-8789-98c2776e1dfc name=/runtime.v1.RuntimeService/CreateContainer
	Nov 28 00:05:13 scheduled-stop-986183 crio[900]: time="2023-11-28 00:05:13.238831320Z" level=warning msg="Allowed annotations are specified for workload []"
	Nov 28 00:05:13 scheduled-stop-986183 crio[900]: time="2023-11-28 00:05:13.409504012Z" level=info msg="Created container 73b0ca5726f8c65d4e21e8693a51bd47bd78d44d4527b04e5e537b7fec8e877d: kube-system/kube-apiserver-scheduled-stop-986183/kube-apiserver" id=c11fd5b1-5fa7-47bd-8789-98c2776e1dfc name=/runtime.v1.RuntimeService/CreateContainer
	Nov 28 00:05:13 scheduled-stop-986183 crio[900]: time="2023-11-28 00:05:13.410336571Z" level=info msg="Starting container: 73b0ca5726f8c65d4e21e8693a51bd47bd78d44d4527b04e5e537b7fec8e877d" id=01277cf5-4176-4510-a049-f362f882bd09 name=/runtime.v1.RuntimeService/StartContainer
	Nov 28 00:05:13 scheduled-stop-986183 crio[900]: time="2023-11-28 00:05:13.411314244Z" level=info msg="Created container cf9f12b03106bdeaedd91483871c11deb31d77ac88e3d760659d8df9af773801: kube-system/etcd-scheduled-stop-986183/etcd" id=899196e2-b77a-47dc-be5e-93e61a9d46ce name=/runtime.v1.RuntimeService/CreateContainer
	Nov 28 00:05:13 scheduled-stop-986183 crio[900]: time="2023-11-28 00:05:13.411804134Z" level=info msg="Starting container: cf9f12b03106bdeaedd91483871c11deb31d77ac88e3d760659d8df9af773801" id=214b34a4-0b10-4ca8-a4a9-00f116bb1ea9 name=/runtime.v1.RuntimeService/StartContainer
	Nov 28 00:05:13 scheduled-stop-986183 crio[900]: time="2023-11-28 00:05:13.429747252Z" level=info msg="Started container" PID=1274 containerID=73b0ca5726f8c65d4e21e8693a51bd47bd78d44d4527b04e5e537b7fec8e877d description=kube-system/kube-apiserver-scheduled-stop-986183/kube-apiserver id=01277cf5-4176-4510-a049-f362f882bd09 name=/runtime.v1.RuntimeService/StartContainer sandboxID=e6c36f81784ec6fcd1ce9a02da5d6f5b32a3253f549b43f346681dd284b24bfb
	Nov 28 00:05:13 scheduled-stop-986183 crio[900]: time="2023-11-28 00:05:13.430493559Z" level=info msg="Created container 7e83e778e0789bf7d99a16eab0e2e8425ef1857f401a8d46c5aba96aee787ad3: kube-system/kube-controller-manager-scheduled-stop-986183/kube-controller-manager" id=59878822-5a27-432f-9ad6-43089d032eab name=/runtime.v1.RuntimeService/CreateContainer
	Nov 28 00:05:13 scheduled-stop-986183 crio[900]: time="2023-11-28 00:05:13.431069159Z" level=info msg="Starting container: 7e83e778e0789bf7d99a16eab0e2e8425ef1857f401a8d46c5aba96aee787ad3" id=d47af599-dc15-4500-bb09-3917e8ecfd99 name=/runtime.v1.RuntimeService/StartContainer
	Nov 28 00:05:13 scheduled-stop-986183 crio[900]: time="2023-11-28 00:05:13.434515215Z" level=info msg="Started container" PID=1242 containerID=cf9f12b03106bdeaedd91483871c11deb31d77ac88e3d760659d8df9af773801 description=kube-system/etcd-scheduled-stop-986183/etcd id=214b34a4-0b10-4ca8-a4a9-00f116bb1ea9 name=/runtime.v1.RuntimeService/StartContainer sandboxID=ebeb04ba5fd320ba9bf9d207bf57eb6ba157c471e758fc996068ef11abdb681e
	Nov 28 00:05:13 scheduled-stop-986183 crio[900]: time="2023-11-28 00:05:13.454492726Z" level=info msg="Created container f0fd11cbbd4a0325e8a32469a659bbcaceb56dbfbddee208be61733034787ecc: kube-system/kube-scheduler-scheduled-stop-986183/kube-scheduler" id=118c270f-cb7a-4b9a-8555-a83a3908282b name=/runtime.v1.RuntimeService/CreateContainer
	Nov 28 00:05:13 scheduled-stop-986183 crio[900]: time="2023-11-28 00:05:13.455143214Z" level=info msg="Starting container: f0fd11cbbd4a0325e8a32469a659bbcaceb56dbfbddee208be61733034787ecc" id=cae5ac10-5e3f-4997-bb3d-432ccd554d4e name=/runtime.v1.RuntimeService/StartContainer
	Nov 28 00:05:13 scheduled-stop-986183 crio[900]: time="2023-11-28 00:05:13.472983588Z" level=info msg="Started container" PID=1248 containerID=7e83e778e0789bf7d99a16eab0e2e8425ef1857f401a8d46c5aba96aee787ad3 description=kube-system/kube-controller-manager-scheduled-stop-986183/kube-controller-manager id=d47af599-dc15-4500-bb09-3917e8ecfd99 name=/runtime.v1.RuntimeService/StartContainer sandboxID=dc8449c73492fa753f03e1ed68beee236351477ffbb8a0a21477fab572dc90c1
	Nov 28 00:05:13 scheduled-stop-986183 crio[900]: time="2023-11-28 00:05:13.496095296Z" level=info msg="Started container" PID=1264 containerID=f0fd11cbbd4a0325e8a32469a659bbcaceb56dbfbddee208be61733034787ecc description=kube-system/kube-scheduler-scheduled-stop-986183/kube-scheduler id=cae5ac10-5e3f-4997-bb3d-432ccd554d4e name=/runtime.v1.RuntimeService/StartContainer sandboxID=6639e654454a893728c2224901292f445aebd613f0f225807937d464a67b63b7
	Nov 28 00:05:20 scheduled-stop-986183 crio[900]: time="2023-11-28 00:05:20.650525783Z" level=info msg="Checking image status: registry.k8s.io/pause:3.9" id=6b267949-a35b-446b-997f-8ea67eb3d3f1 name=/runtime.v1.ImageService/ImageStatus
	Nov 28 00:05:20 scheduled-stop-986183 crio[900]: time="2023-11-28 00:05:20.650689301Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e,RepoTags:[registry.k8s.io/pause:3.9],RepoDigests:[registry.k8s.io/pause@sha256:3ec98b8452dc8ae265a6917dfb81587ac78849e520d5dbba6de524851d20eca6 registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097],Size_:520014,Uid:&Int64Value{Value:65535,},Username:,Spec:nil,},Info:map[string]string{},}" id=6b267949-a35b-446b-997f-8ea67eb3d3f1 name=/runtime.v1.ImageService/ImageStatus
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	73b0ca5726f8c       04b4c447bb9d4840af3bf7e836397379d65df87c86e55dcd27f31a8d11df2419   11 seconds ago      Running             kube-apiserver            0                   e6c36f81784ec       kube-apiserver-scheduled-stop-986183
	f0fd11cbbd4a0       05c284c929889d88306fdb3dd14ee2d0132543740f9e247685243214fc3d2c54   11 seconds ago      Running             kube-scheduler            0                   6639e654454a8       kube-scheduler-scheduled-stop-986183
	7e83e778e0789       9961cbceaf234d59b7dcf8a197a024f3e3ce4b7fe2b67c2378efd3d209ca994b   11 seconds ago      Running             kube-controller-manager   0                   dc8449c73492f       kube-controller-manager-scheduled-stop-986183
	cf9f12b03106b       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace   11 seconds ago      Running             etcd                      0                   ebeb04ba5fd32       etcd-scheduled-stop-986183
	
	* 
	* ==> describe nodes <==
	* Name:               scheduled-stop-986183
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=scheduled-stop-986183
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c8086df69b157f30f19d083fe45cc014f102df45
	                    minikube.k8s.io/name=scheduled-stop-986183
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_11_28T00_05_21_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 28 Nov 2023 00:05:17 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  scheduled-stop-986183
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 28 Nov 2023 00:05:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 28 Nov 2023 00:05:20 +0000   Tue, 28 Nov 2023 00:05:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 28 Nov 2023 00:05:20 +0000   Tue, 28 Nov 2023 00:05:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 28 Nov 2023 00:05:20 +0000   Tue, 28 Nov 2023 00:05:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Tue, 28 Nov 2023 00:05:20 +0000   Tue, 28 Nov 2023 00:05:14 +0000   KubeletNotReady              [container runtime status check may not have completed yet, container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?]
	Addresses:
	  InternalIP:  192.168.67.2
	  Hostname:    scheduled-stop-986183
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022500Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022500Ki
	  pods:               110
	System Info:
	  Machine ID:                 73451f1f75a94588a770362fe170679c
	  System UUID:                49841136-f86b-4edc-ad5d-c70820395031
	  Boot ID:                    eb10cf4d-5884-4052-85dd-9e7b7999f82d
	  Kernel Version:             5.15.0-1050-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	Non-terminated Pods:          (4 in total)
	  Namespace                   Name                                             CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                             ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-scheduled-stop-986183                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         4s
	  kube-system                 kube-apiserver-scheduled-stop-986183             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4s
	  kube-system                 kube-controller-manager-scheduled-stop-986183    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4s
	  kube-system                 kube-scheduler-scheduled-stop-986183             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                650m (32%)  0 (0%)
	  memory             100Mi (1%)  0 (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From     Message
	  ----    ------                   ----               ----     -------
	  Normal  NodeHasSufficientMemory  12s (x8 over 12s)  kubelet  Node scheduled-stop-986183 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12s (x8 over 12s)  kubelet  Node scheduled-stop-986183 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12s (x8 over 12s)  kubelet  Node scheduled-stop-986183 status is now: NodeHasSufficientPID
	  Normal  Starting                 4s                 kubelet  Starting kubelet.
	  Normal  NodeHasSufficientMemory  4s                 kubelet  Node scheduled-stop-986183 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4s                 kubelet  Node scheduled-stop-986183 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4s                 kubelet  Node scheduled-stop-986183 status is now: NodeHasSufficientPID
	
	* 
	* ==> dmesg <==
	* [  +0.001173] FS-Cache: O-key=[8] '7bd7c90000000000'
	[  +0.000758] FS-Cache: N-cookie c=00000055 [p=0000004b fl=2 nc=0 na=1]
	[  +0.001002] FS-Cache: N-cookie d=00000000fe658175{9p.inode} n=00000000124770db
	[  +0.001086] FS-Cache: N-key=[8] '7bd7c90000000000'
	[  +2.367044] FS-Cache: Duplicate cookie detected
	[  +0.000741] FS-Cache: O-cookie c=0000004d [p=0000004b fl=226 nc=0 na=1]
	[  +0.001026] FS-Cache: O-cookie d=00000000fe658175{9p.inode} n=0000000014c47df7
	[  +0.001148] FS-Cache: O-key=[8] '7ad7c90000000000'
	[  +0.000733] FS-Cache: N-cookie c=00000057 [p=0000004b fl=2 nc=0 na=1]
	[  +0.001050] FS-Cache: N-cookie d=00000000fe658175{9p.inode} n=00000000ce1a5764
	[  +0.001129] FS-Cache: N-key=[8] '7ad7c90000000000'
	[  +0.423214] FS-Cache: Duplicate cookie detected
	[  +0.000771] FS-Cache: O-cookie c=00000051 [p=0000004b fl=226 nc=0 na=1]
	[  +0.001066] FS-Cache: O-cookie d=00000000fe658175{9p.inode} n=00000000c7b9da3e
	[  +0.001094] FS-Cache: O-key=[8] '80d7c90000000000'
	[  +0.000747] FS-Cache: N-cookie c=00000058 [p=0000004b fl=2 nc=0 na=1]
	[  +0.000969] FS-Cache: N-cookie d=00000000fe658175{9p.inode} n=00000000e29de338
	[  +0.001127] FS-Cache: N-key=[8] '80d7c90000000000'
	[  +4.315058] FS-Cache: Duplicate cookie detected
	[  +0.000758] FS-Cache: O-cookie c=0000005a [p=00000002 fl=222 nc=0 na=1]
	[  +0.001009] FS-Cache: O-cookie d=000000006e56f75c{9P.session} n=00000000db26fcaf
	[  +0.001116] FS-Cache: O-key=[10] '34333030363632333434'
	[  +0.000817] FS-Cache: N-cookie c=0000005b [p=00000002 fl=2 nc=0 na=1]
	[  +0.001032] FS-Cache: N-cookie d=000000006e56f75c{9P.session} n=00000000f4cdfc72
	[  +0.001142] FS-Cache: N-key=[10] '34333030363632333434'
	
	* 
	* ==> etcd [cf9f12b03106bdeaedd91483871c11deb31d77ac88e3d760659d8df9af773801] <==
	* {"level":"info","ts":"2023-11-28T00:05:13.557414Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 switched to configuration voters=(9694253945895198663)"}
	{"level":"info","ts":"2023-11-28T00:05:13.557563Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","added-peer-id":"8688e899f7831fc7","added-peer-peer-urls":["https://192.168.67.2:2380"]}
	{"level":"info","ts":"2023-11-28T00:05:13.569156Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-11-28T00:05:13.569196Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2023-11-28T00:05:13.56942Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2023-11-28T00:05:13.570123Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"8688e899f7831fc7","initial-advertise-peer-urls":["https://192.168.67.2:2380"],"listen-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.67.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-11-28T00:05:13.570217Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-11-28T00:05:13.937906Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 is starting a new election at term 1"}
	{"level":"info","ts":"2023-11-28T00:05:13.93802Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became pre-candidate at term 1"}
	{"level":"info","ts":"2023-11-28T00:05:13.938075Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgPreVoteResp from 8688e899f7831fc7 at term 1"}
	{"level":"info","ts":"2023-11-28T00:05:13.938114Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became candidate at term 2"}
	{"level":"info","ts":"2023-11-28T00:05:13.938147Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgVoteResp from 8688e899f7831fc7 at term 2"}
	{"level":"info","ts":"2023-11-28T00:05:13.938186Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became leader at term 2"}
	{"level":"info","ts":"2023-11-28T00:05:13.938224Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8688e899f7831fc7 elected leader 8688e899f7831fc7 at term 2"}
	{"level":"info","ts":"2023-11-28T00:05:13.939368Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"8688e899f7831fc7","local-member-attributes":"{Name:scheduled-stop-986183 ClientURLs:[https://192.168.67.2:2379]}","request-path":"/0/members/8688e899f7831fc7/attributes","cluster-id":"9d8fdeb88b6def78","publish-timeout":"7s"}
	{"level":"info","ts":"2023-11-28T00:05:13.939568Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-11-28T00:05:13.950348Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-11-28T00:05:13.950475Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-11-28T00:05:13.951397Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.67.2:2379"}
	{"level":"info","ts":"2023-11-28T00:05:13.957918Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-28T00:05:13.965945Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-11-28T00:05:13.96602Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-11-28T00:05:13.966083Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-28T00:05:13.966185Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-28T00:05:13.966235Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	
	* 
	* ==> kernel <==
	*  00:05:24 up  6:47,  0 users,  load average: 2.43, 1.84, 1.93
	Linux scheduled-stop-986183 5.15.0-1050-aws #55~20.04.1-Ubuntu SMP Mon Nov 6 12:18:16 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	* 
	* ==> kube-apiserver [73b0ca5726f8c65d4e21e8693a51bd47bd78d44d4527b04e5e537b7fec8e877d] <==
	* I1128 00:05:17.359003       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1128 00:05:17.359325       1 aggregator.go:166] initial CRD sync complete...
	I1128 00:05:17.359375       1 autoregister_controller.go:141] Starting autoregister controller
	I1128 00:05:17.359403       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1128 00:05:17.359449       1 cache.go:39] Caches are synced for autoregister controller
	I1128 00:05:17.450236       1 shared_informer.go:318] Caches are synced for configmaps
	I1128 00:05:17.450419       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1128 00:05:17.450457       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1128 00:05:17.450470       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	E1128 00:05:17.451649       1 controller.go:146] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I1128 00:05:17.453553       1 controller.go:624] quota admission added evaluator for: namespaces
	I1128 00:05:17.656282       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1128 00:05:18.160508       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1128 00:05:18.166505       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1128 00:05:18.166532       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1128 00:05:18.660530       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1128 00:05:18.708453       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1128 00:05:18.798252       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1128 00:05:18.804465       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.67.2]
	I1128 00:05:18.805571       1 controller.go:624] quota admission added evaluator for: endpoints
	I1128 00:05:18.810403       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1128 00:05:19.271465       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1128 00:05:20.556512       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1128 00:05:20.570982       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1128 00:05:20.590971       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	
	* 
	* ==> kube-controller-manager [7e83e778e0789bf7d99a16eab0e2e8425ef1857f401a8d46c5aba96aee787ad3] <==
	* I1128 00:05:14.530470       1 serving.go:348] Generated self-signed cert in-memory
	I1128 00:05:15.804561       1 controllermanager.go:189] "Starting" version="v1.28.4"
	I1128 00:05:15.804591       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1128 00:05:15.806534       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I1128 00:05:15.806670       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1128 00:05:15.806820       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1128 00:05:15.806990       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1128 00:05:19.262803       1 shared_informer.go:311] Waiting for caches to sync for tokens
	I1128 00:05:19.362985       1 shared_informer.go:318] Caches are synced for tokens
	
	* 
	* ==> kube-scheduler [f0fd11cbbd4a0325e8a32469a659bbcaceb56dbfbddee208be61733034787ecc] <==
	* W1128 00:05:17.409674       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1128 00:05:17.409713       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1128 00:05:17.410385       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1128 00:05:17.410464       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1128 00:05:17.410569       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1128 00:05:17.410614       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1128 00:05:17.410698       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1128 00:05:17.410735       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1128 00:05:17.410887       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1128 00:05:17.410932       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1128 00:05:18.208431       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1128 00:05:18.208464       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1128 00:05:18.270361       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1128 00:05:18.270397       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1128 00:05:18.316798       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1128 00:05:18.316921       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1128 00:05:18.318704       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1128 00:05:18.318797       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1128 00:05:18.346225       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1128 00:05:18.346337       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1128 00:05:18.383216       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1128 00:05:18.383331       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1128 00:05:18.497540       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1128 00:05:18.497573       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I1128 00:05:21.357859       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* Nov 28 00:05:20 scheduled-stop-986183 kubelet[1389]: I1128 00:05:20.968192    1389 topology_manager.go:215] "Topology Admit Handler" podUID="4d78c1a3ac1cf78650c4338a8ce5a2d6" podNamespace="kube-system" podName="etcd-scheduled-stop-986183"
	Nov 28 00:05:20 scheduled-stop-986183 kubelet[1389]: I1128 00:05:20.968331    1389 topology_manager.go:215] "Topology Admit Handler" podUID="2570c9b12d08123e4401152f057afc34" podNamespace="kube-system" podName="kube-apiserver-scheduled-stop-986183"
	Nov 28 00:05:20 scheduled-stop-986183 kubelet[1389]: I1128 00:05:20.968378    1389 topology_manager.go:215] "Topology Admit Handler" podUID="5bcacfc3842546c196fc152eee786fc2" podNamespace="kube-system" podName="kube-controller-manager-scheduled-stop-986183"
	Nov 28 00:05:20 scheduled-stop-986183 kubelet[1389]: I1128 00:05:20.968419    1389 topology_manager.go:215] "Topology Admit Handler" podUID="f6ec8dfd0da37b5be93882244b21250a" podNamespace="kube-system" podName="kube-scheduler-scheduled-stop-986183"
	Nov 28 00:05:21 scheduled-stop-986183 kubelet[1389]: I1128 00:05:21.062684    1389 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5bcacfc3842546c196fc152eee786fc2-flexvolume-dir\") pod \"kube-controller-manager-scheduled-stop-986183\" (UID: \"5bcacfc3842546c196fc152eee786fc2\") " pod="kube-system/kube-controller-manager-scheduled-stop-986183"
	Nov 28 00:05:21 scheduled-stop-986183 kubelet[1389]: I1128 00:05:21.062744    1389 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5bcacfc3842546c196fc152eee786fc2-kubeconfig\") pod \"kube-controller-manager-scheduled-stop-986183\" (UID: \"5bcacfc3842546c196fc152eee786fc2\") " pod="kube-system/kube-controller-manager-scheduled-stop-986183"
	Nov 28 00:05:21 scheduled-stop-986183 kubelet[1389]: I1128 00:05:21.062778    1389 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f6ec8dfd0da37b5be93882244b21250a-kubeconfig\") pod \"kube-scheduler-scheduled-stop-986183\" (UID: \"f6ec8dfd0da37b5be93882244b21250a\") " pod="kube-system/kube-scheduler-scheduled-stop-986183"
	Nov 28 00:05:21 scheduled-stop-986183 kubelet[1389]: I1128 00:05:21.062803    1389 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/4d78c1a3ac1cf78650c4338a8ce5a2d6-etcd-certs\") pod \"etcd-scheduled-stop-986183\" (UID: \"4d78c1a3ac1cf78650c4338a8ce5a2d6\") " pod="kube-system/etcd-scheduled-stop-986183"
	Nov 28 00:05:21 scheduled-stop-986183 kubelet[1389]: I1128 00:05:21.062831    1389 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5bcacfc3842546c196fc152eee786fc2-k8s-certs\") pod \"kube-controller-manager-scheduled-stop-986183\" (UID: \"5bcacfc3842546c196fc152eee786fc2\") " pod="kube-system/kube-controller-manager-scheduled-stop-986183"
	Nov 28 00:05:21 scheduled-stop-986183 kubelet[1389]: I1128 00:05:21.062866    1389 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5bcacfc3842546c196fc152eee786fc2-usr-share-ca-certificates\") pod \"kube-controller-manager-scheduled-stop-986183\" (UID: \"5bcacfc3842546c196fc152eee786fc2\") " pod="kube-system/kube-controller-manager-scheduled-stop-986183"
	Nov 28 00:05:21 scheduled-stop-986183 kubelet[1389]: I1128 00:05:21.062899    1389 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5bcacfc3842546c196fc152eee786fc2-etc-ca-certificates\") pod \"kube-controller-manager-scheduled-stop-986183\" (UID: \"5bcacfc3842546c196fc152eee786fc2\") " pod="kube-system/kube-controller-manager-scheduled-stop-986183"
	Nov 28 00:05:21 scheduled-stop-986183 kubelet[1389]: I1128 00:05:21.062933    1389 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2570c9b12d08123e4401152f057afc34-ca-certs\") pod \"kube-apiserver-scheduled-stop-986183\" (UID: \"2570c9b12d08123e4401152f057afc34\") " pod="kube-system/kube-apiserver-scheduled-stop-986183"
	Nov 28 00:05:21 scheduled-stop-986183 kubelet[1389]: I1128 00:05:21.062962    1389 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2570c9b12d08123e4401152f057afc34-etc-ca-certificates\") pod \"kube-apiserver-scheduled-stop-986183\" (UID: \"2570c9b12d08123e4401152f057afc34\") " pod="kube-system/kube-apiserver-scheduled-stop-986183"
	Nov 28 00:05:21 scheduled-stop-986183 kubelet[1389]: I1128 00:05:21.062987    1389 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2570c9b12d08123e4401152f057afc34-k8s-certs\") pod \"kube-apiserver-scheduled-stop-986183\" (UID: \"2570c9b12d08123e4401152f057afc34\") " pod="kube-system/kube-apiserver-scheduled-stop-986183"
	Nov 28 00:05:21 scheduled-stop-986183 kubelet[1389]: I1128 00:05:21.063012    1389 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2570c9b12d08123e4401152f057afc34-usr-local-share-ca-certificates\") pod \"kube-apiserver-scheduled-stop-986183\" (UID: \"2570c9b12d08123e4401152f057afc34\") " pod="kube-system/kube-apiserver-scheduled-stop-986183"
	Nov 28 00:05:21 scheduled-stop-986183 kubelet[1389]: I1128 00:05:21.063037    1389 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2570c9b12d08123e4401152f057afc34-usr-share-ca-certificates\") pod \"kube-apiserver-scheduled-stop-986183\" (UID: \"2570c9b12d08123e4401152f057afc34\") " pod="kube-system/kube-apiserver-scheduled-stop-986183"
	Nov 28 00:05:21 scheduled-stop-986183 kubelet[1389]: I1128 00:05:21.063069    1389 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5bcacfc3842546c196fc152eee786fc2-ca-certs\") pod \"kube-controller-manager-scheduled-stop-986183\" (UID: \"5bcacfc3842546c196fc152eee786fc2\") " pod="kube-system/kube-controller-manager-scheduled-stop-986183"
	Nov 28 00:05:21 scheduled-stop-986183 kubelet[1389]: I1128 00:05:21.063094    1389 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5bcacfc3842546c196fc152eee786fc2-usr-local-share-ca-certificates\") pod \"kube-controller-manager-scheduled-stop-986183\" (UID: \"5bcacfc3842546c196fc152eee786fc2\") " pod="kube-system/kube-controller-manager-scheduled-stop-986183"
	Nov 28 00:05:21 scheduled-stop-986183 kubelet[1389]: I1128 00:05:21.063114    1389 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/4d78c1a3ac1cf78650c4338a8ce5a2d6-etcd-data\") pod \"etcd-scheduled-stop-986183\" (UID: \"4d78c1a3ac1cf78650c4338a8ce5a2d6\") " pod="kube-system/etcd-scheduled-stop-986183"
	Nov 28 00:05:21 scheduled-stop-986183 kubelet[1389]: I1128 00:05:21.633029    1389 apiserver.go:52] "Watching apiserver"
	Nov 28 00:05:21 scheduled-stop-986183 kubelet[1389]: I1128 00:05:21.659274    1389 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
	Nov 28 00:05:21 scheduled-stop-986183 kubelet[1389]: I1128 00:05:21.861891    1389 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-scheduled-stop-986183" podStartSLOduration=1.861790256 podCreationTimestamp="2023-11-28 00:05:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-11-28 00:05:21.855604804 +0000 UTC m=+1.332648998" watchObservedRunningTime="2023-11-28 00:05:21.861790256 +0000 UTC m=+1.338834450"
	Nov 28 00:05:21 scheduled-stop-986183 kubelet[1389]: I1128 00:05:21.926501    1389 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-scheduled-stop-986183" podStartSLOduration=1.926455781 podCreationTimestamp="2023-11-28 00:05:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-11-28 00:05:21.889602639 +0000 UTC m=+1.366646832" watchObservedRunningTime="2023-11-28 00:05:21.926455781 +0000 UTC m=+1.403499975"
	Nov 28 00:05:21 scheduled-stop-986183 kubelet[1389]: I1128 00:05:21.955169    1389 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/etcd-scheduled-stop-986183" podStartSLOduration=1.955126207 podCreationTimestamp="2023-11-28 00:05:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-11-28 00:05:21.926909069 +0000 UTC m=+1.403953271" watchObservedRunningTime="2023-11-28 00:05:21.955126207 +0000 UTC m=+1.432170401"
	Nov 28 00:05:22 scheduled-stop-986183 kubelet[1389]: I1128 00:05:22.007006    1389 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-scheduled-stop-986183" podStartSLOduration=2.006962478 podCreationTimestamp="2023-11-28 00:05:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-11-28 00:05:21.955458421 +0000 UTC m=+1.432502632" watchObservedRunningTime="2023-11-28 00:05:22.006962478 +0000 UTC m=+1.484006672"
	

                                                
                                                
-- /stdout --
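
The "describe nodes" section in the log dump above shows the node pinned at Ready=False with reason KubeletNotReady: the container runtime status check had not completed and no CNI configuration file existed yet in /etc/cni/net.d. As an illustrative aside, a minimal client-go sketch of that same readiness check (hypothetical code, not part of helpers_test.go; it assumes the KUBECONFIG path printed in the minikube output later in this report):

	package main
	
	import (
		"context"
		"fmt"
		"time"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	// nodeReady reports whether the NodeReady condition is True -- the same
	// condition rendered as "Ready False ... KubeletNotReady" in the dump above.
	func nodeReady(n *corev1.Node) bool {
		for _, c := range n.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}
	
	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/17206-1455288/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		for i := 0; i < 30; i++ {
			n, err := cs.CoreV1().Nodes().Get(context.TODO(), "scheduled-stop-986183", metav1.GetOptions{})
			if err == nil && nodeReady(n) {
				fmt.Println("node Ready")
				return
			}
			// NotReady is expected until a CNI config lands in /etc/cni/net.d.
			time.Sleep(2 * time.Second)
		}
		fmt.Println("node never became Ready")
	}
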
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p scheduled-stop-986183 -n scheduled-stop-986183
helpers_test.go:261: (dbg) Run:  kubectl --context scheduled-stop-986183 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: storage-provisioner
helpers_test.go:274: ======> post-mortem[TestScheduledStopUnix]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context scheduled-stop-986183 describe pod storage-provisioner
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context scheduled-stop-986183 describe pod storage-provisioner: exit status 1 (99.424181ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context scheduled-stop-986183 describe pod storage-provisioner: exit status 1
helpers_test.go:175: Cleaning up "scheduled-stop-986183" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-986183
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-986183: (2.002009621s)
--- FAIL: TestScheduledStopUnix (34.06s)
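
One note on the post-mortem sequence above: the pod list is gathered across all namespaces (-A) with --field-selector=status.phase!=Running, but the follow-up describe passes only the pod name, so kubectl resolves it in the kubeconfig's current namespace; storage-provisioner lives in kube-system, which is the likely cause of the NotFound / exit status 1. A hedged sketch of the same two-step flow that carries the namespace through (hypothetical code, not the suite's actual helpers):

	package main
	
	import (
		"fmt"
		"os/exec"
		"strings"
	)
	
	func main() {
		ctx := "--context=scheduled-stop-986183"
		// Step 1: the same field-selector query, but capture namespace and name.
		out, err := exec.Command("kubectl", ctx, "get", "po", "-A",
			"--field-selector=status.phase!=Running",
			`-o=jsonpath={range .items[*]}{.metadata.namespace} {.metadata.name}{"\n"}{end}`).Output()
		if err != nil {
			panic(err)
		}
		// Step 2: describe each pod in its own namespace, not the default one.
		for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
			fields := strings.Fields(line)
			if len(fields) != 2 {
				continue
			}
			ns, pod := fields[0], fields[1]
			desc, err := exec.Command("kubectl", ctx, "-n", ns, "describe", "pod", pod).CombinedOutput()
			if err != nil {
				// A pod can also vanish between steps 1 and 2; treat that as benign.
				fmt.Printf("describe %s/%s failed: %v\n", ns, pod, err)
				continue
			}
			fmt.Print(string(desc))
		}
	}
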

                                                
                                    
x
+
TestRunningBinaryUpgrade (96.64s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:133: (dbg) Run:  /tmp/minikube-v1.17.0.752748976.exe start -p running-upgrade-372245 --memory=2200 --vm-driver=docker  --container-runtime=crio
E1128 00:14:23.120236 1460652 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/ingress-addon-legacy-684553/client.crt: no such file or directory
version_upgrade_test.go:133: (dbg) Done: /tmp/minikube-v1.17.0.752748976.exe start -p running-upgrade-372245 --memory=2200 --vm-driver=docker  --container-runtime=crio: (1m19.565593502s)
version_upgrade_test.go:143: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-372245 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:143: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p running-upgrade-372245 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90 (11.798423371s)

                                                
                                                
-- stdout --
	* [running-upgrade-372245] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17206
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17206-1455288/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17206-1455288/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	* Using the docker driver based on existing profile
	* Starting control plane node running-upgrade-372245 in cluster running-upgrade-372245
	* Pulling base image ...
	* Updating the running docker "running-upgrade-372245" container ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1128 00:14:44.361217 1585395 out.go:296] Setting OutFile to fd 1 ...
	I1128 00:14:44.361550 1585395 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1128 00:14:44.361582 1585395 out.go:309] Setting ErrFile to fd 2...
	I1128 00:14:44.361601 1585395 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1128 00:14:44.361911 1585395 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17206-1455288/.minikube/bin
	I1128 00:14:44.362872 1585395 out.go:303] Setting JSON to false
	I1128 00:14:44.364626 1585395 start.go:128] hostinfo: {"hostname":"ip-172-31-29-130","uptime":25034,"bootTime":1701105451,"procs":226,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1050-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1128 00:14:44.364727 1585395 start.go:138] virtualization:  
	I1128 00:14:44.367174 1585395 out.go:177] * [running-upgrade-372245] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I1128 00:14:44.369598 1585395 preload.go:306] deleting older generation preload /home/jenkins/minikube-integration/17206-1455288/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v8-v1.20.2-cri-o-overlay-arm64.tar.lz4
	I1128 00:14:44.375652 1585395 out.go:177]   - MINIKUBE_LOCATION=17206
	I1128 00:14:44.375623 1585395 notify.go:220] Checking for updates...
	I1128 00:14:44.379815 1585395 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1128 00:14:44.381467 1585395 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17206-1455288/kubeconfig
	I1128 00:14:44.382955 1585395 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17206-1455288/.minikube
	I1128 00:14:44.384887 1585395 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1128 00:14:44.386595 1585395 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1128 00:14:44.388460 1585395 config.go:182] Loaded profile config "running-upgrade-372245": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I1128 00:14:44.390888 1585395 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I1128 00:14:44.392413 1585395 driver.go:378] Setting default libvirt URI to qemu:///system
	I1128 00:14:44.433093 1585395 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1128 00:14:44.433187 1585395 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1128 00:14:44.579080 1585395 preload.go:306] deleting older generation preload /home/jenkins/minikube-integration/17206-1455288/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v8-v1.20.2-cri-o-overlay-arm64.tar.lz4.checksum
	I1128 00:14:44.582764 1585395 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:52 OomKillDisable:true NGoroutines:64 SystemTime:2023-11-28 00:14:44.566533022 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1050-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1128 00:14:44.582873 1585395 docker.go:295] overlay module found
	I1128 00:14:44.586096 1585395 out.go:177] * Using the docker driver based on existing profile
	I1128 00:14:44.587617 1585395 start.go:298] selected driver: docker
	I1128 00:14:44.587641 1585395 start.go:902] validating driver "docker" against &{Name:running-upgrade-372245 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:running-upgrade-372245 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.82.22 Port:8443 KubernetesVersion:v1.20.2 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I1128 00:14:44.587755 1585395 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1128 00:14:44.588441 1585395 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1128 00:14:44.675779 1585395 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:52 OomKillDisable:true NGoroutines:64 SystemTime:2023-11-28 00:14:44.66470303 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1050-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1128 00:14:44.676165 1585395 cni.go:84] Creating CNI manager for ""
	I1128 00:14:44.676183 1585395 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1128 00:14:44.676197 1585395 start_flags.go:323] config:
	{Name:running-upgrade-372245 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:running-upgrade-372245 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.82.22 Port:8443 KubernetesVersion:v1.20.2 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I1128 00:14:44.678579 1585395 out.go:177] * Starting control plane node running-upgrade-372245 in cluster running-upgrade-372245
	I1128 00:14:44.681449 1585395 cache.go:121] Beginning downloading kic base image for docker with crio
	I1128 00:14:44.683378 1585395 out.go:177] * Pulling base image ...
	I1128 00:14:44.685963 1585395 preload.go:132] Checking if preload exists for k8s version v1.20.2 and runtime crio
	I1128 00:14:44.686045 1585395 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e in local docker daemon
	I1128 00:14:44.717832 1585395 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e to local cache
	I1128 00:14:44.718085 1585395 image.go:63] Checking for gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e in local cache directory
	I1128 00:14:44.718447 1585395 image.go:118] Writing gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e to local cache
	W1128 00:14:44.753732 1585395 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.2/preloaded-images-k8s-v18-v1.20.2-cri-o-overlay-arm64.tar.lz4 status code: 404
	I1128 00:14:44.753910 1585395 profile.go:148] Saving config to /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/running-upgrade-372245/config.json ...
	I1128 00:14:44.754012 1585395 cache.go:107] acquiring lock: {Name:mk8ccd0fab1c49199c5d4e88f19e9abb32997b5b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1128 00:14:44.754118 1585395 cache.go:115] /home/jenkins/minikube-integration/17206-1455288/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1128 00:14:44.754127 1585395 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/17206-1455288/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 122.42µs
	I1128 00:14:44.754159 1585395 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/17206-1455288/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1128 00:14:44.754170 1585395 cache.go:107] acquiring lock: {Name:mke4eb665ec75eb7289960775c1e952ef3e7ab8d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1128 00:14:44.754181 1585395 cache.go:107] acquiring lock: {Name:mk98b3011fe8d30957a4cf06e4cf62edddfc5cfb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1128 00:14:44.754371 1585395 cache.go:107] acquiring lock: {Name:mkeb262b26d69e0aaa055811148641ca37e16ba0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1128 00:14:44.754397 1585395 cache.go:107] acquiring lock: {Name:mk6857d19a8633c08ab6111154edace1255bf2f7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1128 00:14:44.754599 1585395 cache.go:107] acquiring lock: {Name:mkfec010d9b7d1ee03eac327c332c622595a4af6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1128 00:14:44.754627 1585395 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I1128 00:14:44.754743 1585395 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.2
	I1128 00:14:44.754889 1585395 cache.go:107] acquiring lock: {Name:mk71bbe64b3f5979383f62d8de90db74713fbdef Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1128 00:14:44.754373 1585395 cache.go:107] acquiring lock: {Name:mk8cd01a7e2295f51edd87639073ca5694d47e88 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1128 00:14:44.755364 1585395 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.2
	I1128 00:14:44.755581 1585395 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.2
	I1128 00:14:44.755589 1585395 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.2
	I1128 00:14:44.755792 1585395 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I1128 00:14:44.756158 1585395 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1128 00:14:44.756427 1585395 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.2
	I1128 00:14:44.756829 1585395 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1128 00:14:44.757073 1585395 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.2
	I1128 00:14:44.757220 1585395 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.2
	I1128 00:14:44.757353 1585395 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.2
	I1128 00:14:44.757540 1585395 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1128 00:14:44.758051 1585395 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	W1128 00:14:45.126253 1585395 image.go:265] image registry.k8s.io/kube-proxy:v1.20.2 arch mismatch: want arm64 got amd64. fixing
	I1128 00:14:45.126419 1585395 cache.go:162] opening:  /home/jenkins/minikube-integration/17206-1455288/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2
	I1128 00:14:45.128207 1585395 cache.go:162] opening:  /home/jenkins/minikube-integration/17206-1455288/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2
	I1128 00:14:45.138797 1585395 cache.go:162] opening:  /home/jenkins/minikube-integration/17206-1455288/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2
	W1128 00:14:45.150529 1585395 image.go:265] image registry.k8s.io/coredns:1.7.0 arch mismatch: want arm64 got amd64. fixing
	I1128 00:14:45.150702 1585395 cache.go:162] opening:  /home/jenkins/minikube-integration/17206-1455288/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0
	W1128 00:14:45.153331 1585395 image.go:265] image registry.k8s.io/etcd:3.4.13-0 arch mismatch: want arm64 got amd64. fixing
	I1128 00:14:45.153431 1585395 cache.go:162] opening:  /home/jenkins/minikube-integration/17206-1455288/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0
	I1128 00:14:45.178299 1585395 cache.go:162] opening:  /home/jenkins/minikube-integration/17206-1455288/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2
	I1128 00:14:45.185794 1585395 cache.go:162] opening:  /home/jenkins/minikube-integration/17206-1455288/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2
	I1128 00:14:45.239723 1585395 cache.go:157] /home/jenkins/minikube-integration/17206-1455288/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2 exists
	I1128 00:14:45.239817 1585395 cache.go:96] cache image "registry.k8s.io/pause:3.2" -> "/home/jenkins/minikube-integration/17206-1455288/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2" took 485.645199ms
	I1128 00:14:45.239852 1585395 cache.go:80] save to tar file registry.k8s.io/pause:3.2 -> /home/jenkins/minikube-integration/17206-1455288/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2 succeeded
	    > gcr.io/k8s-minikube/kicbase...:  1.31 MiB / 287.99 MiB [>_] 0.46% ? p/s ? (progress-bar updates elided)
	I1128 00:14:45.605702 1585395 cache.go:157] /home/jenkins/minikube-integration/17206-1455288/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2 exists
	I1128 00:14:45.605770 1585395 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.20.2" -> "/home/jenkins/minikube-integration/17206-1455288/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2" took 850.883402ms
	I1128 00:14:45.605809 1585395 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.20.2 -> /home/jenkins/minikube-integration/17206-1455288/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2 succeeded
	    > gcr.io/k8s-minikube/kicbase...:  15.81 MiB / 287.99 MiB [>] 5.49% ? p/s ? (progress-bar updates elided)
	I1128 00:14:45.742039 1585395 cache.go:157] /home/jenkins/minikube-integration/17206-1455288/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0 exists
	I1128 00:14:45.742071 1585395 cache.go:96] cache image "registry.k8s.io/coredns:1.7.0" -> "/home/jenkins/minikube-integration/17206-1455288/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0" took 987.673289ms
	I1128 00:14:45.742085 1585395 cache.go:80] save to tar file registry.k8s.io/coredns:1.7.0 -> /home/jenkins/minikube-integration/17206-1455288/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0 succeeded
	    > gcr.io/k8s-minikube/kicbase...:  25.94 MiB / 287.99 MiB  9.01% 43.20 MiB (progress-bar updates elided)
	I1128 00:14:46.462845 1585395 cache.go:157] /home/jenkins/minikube-integration/17206-1455288/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2 exists
	I1128 00:14:46.462915 1585395 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.20.2" -> "/home/jenkins/minikube-integration/17206-1455288/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2" took 1.708546127s
	I1128 00:14:46.462944 1585395 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.20.2 -> /home/jenkins/minikube-integration/17206-1455288/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2 succeeded
	    > gcr.io/k8s-minikube/kicbase...:  39.17 MiB / 287.99 MiB  13.60% 40.62 MiB (progress-bar updates elided)
	I1128 00:14:46.804037 1585395 cache.go:157] /home/jenkins/minikube-integration/17206-1455288/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2 exists
	I1128 00:14:46.804064 1585395 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.20.2" -> "/home/jenkins/minikube-integration/17206-1455288/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2" took 2.04989127s
	I1128 00:14:46.804077 1585395 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.20.2 -> /home/jenkins/minikube-integration/17206-1455288/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2 succeeded
	    > gcr.io/k8s-minikube/kicbase...:  43.96 MiB / 287.99 MiB  15.27% 40.62 MiB (progress-bar updates elided)
	I1128 00:14:46.931856 1585395 cache.go:157] /home/jenkins/minikube-integration/17206-1455288/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2 exists
	I1128 00:14:46.931883 1585395 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.20.2" -> "/home/jenkins/minikube-integration/17206-1455288/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2" took 2.177285275s
	I1128 00:14:46.931896 1585395 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.20.2 -> /home/jenkins/minikube-integration/17206-1455288/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2 succeeded
	    > gcr.io/k8s-minikube/kicbase...:  238.06 MiB / 287.99 MiB  82.66% 46.67 MiB (progress-bar updates elided)
	I1128 00:14:50.291037 1585395 cache.go:157] /home/jenkins/minikube-integration/17206-1455288/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0 exists
	I1128 00:14:50.291068 1585395 cache.go:96] cache image "registry.k8s.io/etcd:3.4.13-0" -> "/home/jenkins/minikube-integration/17206-1455288/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0" took 5.53670005s
	I1128 00:14:50.291088 1585395 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.13-0 -> /home/jenkins/minikube-integration/17206-1455288/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0 succeeded
	I1128 00:14:50.291103 1585395 cache.go:87] Successfully saved all images to host disk.
	    > gcr.io/k8s-minikube/kicbase...:  287.99 MiB / 287.99 MiB  100.00% 40.03 MiB (progress-bar updates elided)
	I1128 00:14:52.503372 1585395 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e as a tarball
	I1128 00:14:52.503384 1585395 cache.go:162] Loading gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e from local cache
	I1128 00:14:52.642120 1585395 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e from cached tarball
	I1128 00:14:52.642159 1585395 cache.go:194] Successfully downloaded all kic artifacts
	I1128 00:14:52.642216 1585395 start.go:365] acquiring machines lock for running-upgrade-372245: {Name:mk097ca066d66a1f9ccd6b7ae4ac45da1a2d8c3f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1128 00:14:52.642285 1585395 start.go:369] acquired machines lock for "running-upgrade-372245" in 43.643µs
	I1128 00:14:52.642310 1585395 start.go:96] Skipping create...Using existing machine configuration
	I1128 00:14:52.642322 1585395 fix.go:54] fixHost starting: 
	I1128 00:14:52.642592 1585395 cli_runner.go:164] Run: docker container inspect running-upgrade-372245 --format={{.State.Status}}
	I1128 00:14:52.665192 1585395 fix.go:102] recreateIfNeeded on running-upgrade-372245: state=Running err=<nil>
	W1128 00:14:52.665232 1585395 fix.go:128] unexpected machine state, will restart: <nil>
	I1128 00:14:52.671649 1585395 out.go:177] * Updating the running docker "running-upgrade-372245" container ...
	I1128 00:14:52.673426 1585395 machine.go:88] provisioning docker machine ...
	I1128 00:14:52.673460 1585395 ubuntu.go:169] provisioning hostname "running-upgrade-372245"
	I1128 00:14:52.673540 1585395 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-372245
	I1128 00:14:52.703943 1585395 main.go:141] libmachine: Using SSH client type: native
	I1128 00:14:52.704366 1585395 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3be600] 0x3c0d70 <nil>  [] 0s} 127.0.0.1 34245 <nil> <nil>}
	I1128 00:14:52.704385 1585395 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-372245 && echo "running-upgrade-372245" | sudo tee /etc/hostname
	I1128 00:14:52.905365 1585395 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-372245
	
	I1128 00:14:52.905456 1585395 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-372245
	I1128 00:14:52.940979 1585395 main.go:141] libmachine: Using SSH client type: native
	I1128 00:14:52.941384 1585395 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3be600] 0x3c0d70 <nil>  [] 0s} 127.0.0.1 34245 <nil> <nil>}
	I1128 00:14:52.941403 1585395 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-372245' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-372245/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-372245' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1128 00:14:53.113216 1585395 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1128 00:14:53.113245 1585395 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17206-1455288/.minikube CaCertPath:/home/jenkins/minikube-integration/17206-1455288/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17206-1455288/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17206-1455288/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17206-1455288/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17206-1455288/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17206-1455288/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17206-1455288/.minikube}
	I1128 00:14:53.113277 1585395 ubuntu.go:177] setting up certificates
	I1128 00:14:53.113288 1585395 provision.go:83] configureAuth start
	I1128 00:14:53.113365 1585395 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" running-upgrade-372245
	I1128 00:14:53.132842 1585395 provision.go:138] copyHostCerts
	I1128 00:14:53.132909 1585395 exec_runner.go:144] found /home/jenkins/minikube-integration/17206-1455288/.minikube/ca.pem, removing ...
	I1128 00:14:53.132922 1585395 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17206-1455288/.minikube/ca.pem
	I1128 00:14:53.133003 1585395 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17206-1455288/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17206-1455288/.minikube/ca.pem (1078 bytes)
	I1128 00:14:53.133110 1585395 exec_runner.go:144] found /home/jenkins/minikube-integration/17206-1455288/.minikube/cert.pem, removing ...
	I1128 00:14:53.133121 1585395 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17206-1455288/.minikube/cert.pem
	I1128 00:14:53.133149 1585395 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17206-1455288/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17206-1455288/.minikube/cert.pem (1123 bytes)
	I1128 00:14:53.133205 1585395 exec_runner.go:144] found /home/jenkins/minikube-integration/17206-1455288/.minikube/key.pem, removing ...
	I1128 00:14:53.133214 1585395 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17206-1455288/.minikube/key.pem
	I1128 00:14:53.133239 1585395 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17206-1455288/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17206-1455288/.minikube/key.pem (1679 bytes)
	I1128 00:14:53.133282 1585395 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17206-1455288/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17206-1455288/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17206-1455288/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-372245 san=[192.168.82.22 127.0.0.1 localhost 127.0.0.1 minikube running-upgrade-372245]
	I1128 00:14:53.584510 1585395 provision.go:172] copyRemoteCerts
	I1128 00:14:53.584583 1585395 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1128 00:14:53.584627 1585395 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-372245
	I1128 00:14:53.604383 1585395 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34245 SSHKeyPath:/home/jenkins/minikube-integration/17206-1455288/.minikube/machines/running-upgrade-372245/id_rsa Username:docker}
	I1128 00:14:53.704391 1585395 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-1455288/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1128 00:14:53.729665 1585395 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-1455288/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1128 00:14:53.759998 1585395 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-1455288/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1128 00:14:53.785018 1585395 provision.go:86] duration metric: configureAuth took 671.706924ms
	I1128 00:14:53.785044 1585395 ubuntu.go:193] setting minikube options for container-runtime
	I1128 00:14:53.785247 1585395 config.go:182] Loaded profile config "running-upgrade-372245": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I1128 00:14:53.785360 1585395 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-372245
	I1128 00:14:53.804248 1585395 main.go:141] libmachine: Using SSH client type: native
	I1128 00:14:53.804737 1585395 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3be600] 0x3c0d70 <nil>  [] 0s} 127.0.0.1 34245 <nil> <nil>}
	I1128 00:14:53.804762 1585395 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1128 00:14:54.539366 1585395 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1128 00:14:54.539390 1585395 machine.go:91] provisioned docker machine in 1.865939238s
	I1128 00:14:54.539401 1585395 start.go:300] post-start starting for "running-upgrade-372245" (driver="docker")
	I1128 00:14:54.539413 1585395 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1128 00:14:54.539498 1585395 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1128 00:14:54.539542 1585395 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-372245
	I1128 00:14:54.561000 1585395 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34245 SSHKeyPath:/home/jenkins/minikube-integration/17206-1455288/.minikube/machines/running-upgrade-372245/id_rsa Username:docker}
	I1128 00:14:54.658749 1585395 ssh_runner.go:195] Run: cat /etc/os-release
	I1128 00:14:54.662602 1585395 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1128 00:14:54.662631 1585395 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1128 00:14:54.662644 1585395 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1128 00:14:54.662651 1585395 info.go:137] Remote host: Ubuntu 20.04.1 LTS
	I1128 00:14:54.662662 1585395 filesync.go:126] Scanning /home/jenkins/minikube-integration/17206-1455288/.minikube/addons for local assets ...
	I1128 00:14:54.662721 1585395 filesync.go:126] Scanning /home/jenkins/minikube-integration/17206-1455288/.minikube/files for local assets ...
	I1128 00:14:54.662812 1585395 filesync.go:149] local asset: /home/jenkins/minikube-integration/17206-1455288/.minikube/files/etc/ssl/certs/14606522.pem -> 14606522.pem in /etc/ssl/certs
	I1128 00:14:54.662926 1585395 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1128 00:14:54.671895 1585395 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-1455288/.minikube/files/etc/ssl/certs/14606522.pem --> /etc/ssl/certs/14606522.pem (1708 bytes)
	I1128 00:14:54.695571 1585395 start.go:303] post-start completed in 156.152207ms
	I1128 00:14:54.695654 1585395 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1128 00:14:54.695698 1585395 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-372245
	I1128 00:14:54.714613 1585395 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34245 SSHKeyPath:/home/jenkins/minikube-integration/17206-1455288/.minikube/machines/running-upgrade-372245/id_rsa Username:docker}
	I1128 00:14:54.812174 1585395 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1128 00:14:54.818146 1585395 fix.go:56] fixHost completed within 2.175816871s
	I1128 00:14:54.818214 1585395 start.go:83] releasing machines lock for "running-upgrade-372245", held for 2.175914987s
	I1128 00:14:54.818303 1585395 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" running-upgrade-372245
	I1128 00:14:54.836803 1585395 ssh_runner.go:195] Run: cat /version.json
	I1128 00:14:54.836860 1585395 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-372245
	I1128 00:14:54.837099 1585395 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1128 00:14:54.837151 1585395 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-372245
	I1128 00:14:54.857139 1585395 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34245 SSHKeyPath:/home/jenkins/minikube-integration/17206-1455288/.minikube/machines/running-upgrade-372245/id_rsa Username:docker}
	I1128 00:14:54.869994 1585395 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34245 SSHKeyPath:/home/jenkins/minikube-integration/17206-1455288/.minikube/machines/running-upgrade-372245/id_rsa Username:docker}
	W1128 00:14:54.954657 1585395 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I1128 00:14:54.954736 1585395 ssh_runner.go:195] Run: systemctl --version
	I1128 00:14:55.042197 1585395 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1128 00:14:55.186521 1585395 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1128 00:14:55.192954 1585395 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1128 00:14:55.231174 1585395 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1128 00:14:55.231253 1585395 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1128 00:14:55.306561 1585395 cni.go:262] disabled [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1128 00:14:55.306582 1585395 start.go:472] detecting cgroup driver to use...
	I1128 00:14:55.306612 1585395 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1128 00:14:55.306665 1585395 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1128 00:14:55.344999 1585395 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1128 00:14:55.358056 1585395 docker.go:203] disabling cri-docker service (if available) ...
	I1128 00:14:55.358160 1585395 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1128 00:14:55.372619 1585395 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1128 00:14:55.385497 1585395 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W1128 00:14:55.398707 1585395 docker.go:213] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I1128 00:14:55.398814 1585395 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1128 00:14:55.581084 1585395 docker.go:219] disabling docker service ...
	I1128 00:14:55.581193 1585395 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1128 00:14:55.600378 1585395 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1128 00:14:55.616602 1585395 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1128 00:14:55.820264 1585395 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1128 00:14:56.009437 1585395 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1128 00:14:56.025612 1585395 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1128 00:14:56.044769 1585395 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1128 00:14:56.044885 1585395 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1128 00:14:56.060448 1585395 out.go:177] 
	W1128 00:14:56.062429 1585395 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	W1128 00:14:56.062451 1585395 out.go:239] * 
	W1128 00:14:56.063404 1585395 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1128 00:14:56.065814 1585395 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:145: upgrade from v1.17.0 to HEAD failed: out/minikube-linux-arm64 start -p running-upgrade-372245 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90
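The sed failure above is the immediate cause of exit status 90: the v1.17.0-era kicbase container has no /etc/crio/crio.conf.d/02-crio.conf, while the HEAD binary assumes that drop-in path when it rewrites pause_image. A minimal diagnostic sketch, assuming the profile container were still running and that the older image keeps its CRI-O settings in /etc/crio/crio.conf instead (an assumption, not confirmed by this log):

	# Check which CRI-O config layout the old container actually ships.
	docker exec running-upgrade-372245 ls /etc/crio /etc/crio/crio.conf.d
	# If only the monolithic config exists, the pause_image line lives there:
	docker exec running-upgrade-372245 grep -n pause_image /etc/crio/crio.conf
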
panic.go:523: *** TestRunningBinaryUpgrade FAILED at 2023-11-28 00:14:56.095020507 +0000 UTC m=+2707.313854717
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestRunningBinaryUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect running-upgrade-372245
helpers_test.go:235: (dbg) docker inspect running-upgrade-372245:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "018674985c9e71ad8beaadb1a969faa80912f98a606301c6c89aa4b1ad6267c7",
	        "Created": "2023-11-28T00:13:54.801619202Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1582042,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-11-28T00:13:55.471681675Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:9b79b8263a5873a7b57b8bb7698df1f71e90108b3174dea92dc6c576c0a9dbf9",
	        "ResolvConfPath": "/var/lib/docker/containers/018674985c9e71ad8beaadb1a969faa80912f98a606301c6c89aa4b1ad6267c7/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/018674985c9e71ad8beaadb1a969faa80912f98a606301c6c89aa4b1ad6267c7/hostname",
	        "HostsPath": "/var/lib/docker/containers/018674985c9e71ad8beaadb1a969faa80912f98a606301c6c89aa4b1ad6267c7/hosts",
	        "LogPath": "/var/lib/docker/containers/018674985c9e71ad8beaadb1a969faa80912f98a606301c6c89aa4b1ad6267c7/018674985c9e71ad8beaadb1a969faa80912f98a606301c6c89aa4b1ad6267c7-json.log",
	        "Name": "/running-upgrade-372245",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "running-upgrade-372245:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "running-upgrade-372245",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/edadbc1cc9ec97a683dc1da33b92ea1deb5718bc2e0f236ebb449d236e4d61de-init/diff:/var/lib/docker/overlay2/be89b1a642a647736fe4777a54975f65d2b852c408ffd505ff1ad0ee53670c8b/diff:/var/lib/docker/overlay2/d4fc654d629fd8c7c78e397fe8c5de839625336a2b8844e8efefc6a243570c62/diff:/var/lib/docker/overlay2/cc52b963cea2003555382d7367089836676f8e2131b87937819226d9caff3459/diff:/var/lib/docker/overlay2/941f5079f0fe24d645d14d84277bced595310b70291d98290932fc9ba55ee8e4/diff:/var/lib/docker/overlay2/0dd25be046e71d814b93c0721424ec6f74370fb3fb3fb81f1e6235a061d9eed6/diff:/var/lib/docker/overlay2/395909bca833dc7b0cd310b4b58445641497771f697bd3d3b79a67508ca9448e/diff:/var/lib/docker/overlay2/0df8e9f904960f11645a4d5ad3fb1b56afaa022b83ce1c28c4e9a439011d325f/diff:/var/lib/docker/overlay2/861f45f509d6550f7eaab835ee7278ca7842d7980ae7185723bda584984fddaa/diff:/var/lib/docker/overlay2/0fc85348e6051ff3b0f65382578c60ab23e7d706ab997f4cd2d8c2e11c281ec1/diff:/var/lib/docker/overlay2/59188b
e36a07eaf04b60a8be579b5463e1a0ad343d1004b7b2805f7570913c68/diff:/var/lib/docker/overlay2/ddc15eb3eb1dde50a71334ba4e37f4120231a06d3451d75cb1382a565c73a242/diff:/var/lib/docker/overlay2/59138126445eef8c286edff1c97a1801a36b83c651894777bd3df8c35ff50cdf/diff:/var/lib/docker/overlay2/fcf045277ee1e42833c6409af7e03a0252aa2079f49585ce97ae472fd77b2918/diff:/var/lib/docker/overlay2/f41e7a63e807743274dd9066a8d0a08e43280b5ecaaaf845d3fdf5dbecaf62da/diff:/var/lib/docker/overlay2/fd3f9b5406e380f8c80876d9cbd70cf75ba90152a32b99cb2a584eb06bd04354/diff:/var/lib/docker/overlay2/f5ab162e5260d272456cbcad893457c0b897c832bb2b9d54af477cb1fee99788/diff:/var/lib/docker/overlay2/aacbd4773c3c0057c51d9c93eafe834721e8dbd6f65ee19a0a21d009dc219b5a/diff:/var/lib/docker/overlay2/788bec70c7a54177ede92cde4d217606c26af06a01568c755719d8dadb3664a4/diff:/var/lib/docker/overlay2/30c60b23e41d397a6b900bb18309f0201de2ed0cf8fd4c101f973476cd2a2e3d/diff:/var/lib/docker/overlay2/71a9e815fb66d44369755416a846df7559b060d56ad5e1f5cff9896985d95ff2/diff:/var/lib/d
ocker/overlay2/87c55e1a92915415bf1ce4e2a0f8b95648c9538ebb8a99b2beb392e5bc8f6b04/diff:/var/lib/docker/overlay2/856a64d049fbb5297e1545cadfca54fe53fcf82927b67741837bb1004fa1ce64/diff:/var/lib/docker/overlay2/9a5af96375cf210ca9502ee1fd3ed8e45f8cf276cebb75b4c838b6cf4b8bdedd/diff:/var/lib/docker/overlay2/f50adc28d2b8d31772f577952247d1fc58b041953b86c8721f32e4166325a6d9/diff:/var/lib/docker/overlay2/598ed6360036476c0564b453e7d80ffa450440bb29422505309ee45e2fd34802/diff:/var/lib/docker/overlay2/03a38c2a9c289d88e12931da2846c2587578844cdbdeae6967d0f82a33d2f7f2/diff:/var/lib/docker/overlay2/0848375e2519bac7b1100a1f382394b77828c2d3ce46eb0fbc5f038246a59c6f/diff:/var/lib/docker/overlay2/52c8b5315f0a099fd55bff30bf87f2ce8f617381fecbe75755cbd15fe5c09990/diff:/var/lib/docker/overlay2/cfa02b583e13cfb39ebc5d94665d77dcae1c284171691f40eb263f1d7e6c679e/diff:/var/lib/docker/overlay2/6abdd96da9bc9961f17d1c043ec4e979d2913f274869dc4161bdc44f7906addf/diff:/var/lib/docker/overlay2/1b608a88ec6bca94ac84f1b3e21be68c6ba07e8edff5589697b591126f7
2a5c8/diff:/var/lib/docker/overlay2/4028de82c034093fdcf276854ab5e5f580a86b84d0fbc64dc90b0f95335ae534/diff:/var/lib/docker/overlay2/1ccd252a3386ec6a047050d5d687b9618eed5a34b1f150779ef8633be9fbcc46/diff:/var/lib/docker/overlay2/9af2077ad06d16ab8232b6f7d38995198e8d35cbaec52cc53f5c6ac0ab83e046/diff",
	                "MergedDir": "/var/lib/docker/overlay2/edadbc1cc9ec97a683dc1da33b92ea1deb5718bc2e0f236ebb449d236e4d61de/merged",
	                "UpperDir": "/var/lib/docker/overlay2/edadbc1cc9ec97a683dc1da33b92ea1deb5718bc2e0f236ebb449d236e4d61de/diff",
	                "WorkDir": "/var/lib/docker/overlay2/edadbc1cc9ec97a683dc1da33b92ea1deb5718bc2e0f236ebb449d236e4d61de/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "running-upgrade-372245",
	                "Source": "/var/lib/docker/volumes/running-upgrade-372245/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "running-upgrade-372245",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "running-upgrade-372245",
	                "name.minikube.sigs.k8s.io": "running-upgrade-372245",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "0760a0a86d7bda470d4c4127fe9abcbae06220ffa92c9b330f4465427c9b9d3b",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34245"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34244"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34243"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34242"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/0760a0a86d7b",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "running-upgrade-372245": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.82.22"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "018674985c9e",
	                        "running-upgrade-372245"
	                    ],
	                    "NetworkID": "a56da9e175042fa224a8e66b9aa26626914c31543287f9cfcafff3a72b9d7c2d",
	                    "EndpointID": "c683fc459f4f28596a21ab57c8334c88e8f754d8caf3387d7b7ce7129a46203c",
	                    "Gateway": "192.168.82.1",
	                    "IPAddress": "192.168.82.22",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:52:16",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p running-upgrade-372245 -n running-upgrade-372245
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p running-upgrade-372245 -n running-upgrade-372245: exit status 4 (396.685802ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1128 00:14:56.452186 1586553 status.go:415] kubeconfig endpoint: extract IP: "running-upgrade-372245" does not appear in /home/jenkins/minikube-integration/17206-1455288/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 4 (may be ok)
helpers_test.go:241: "running-upgrade-372245" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
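The exit status 4 is consistent with the status.go:415 error above: the profile's cluster entry is missing from the kubeconfig, so `status` cannot extract an API endpoint. A minimal sketch of the check and the repair minikube itself suggests, assuming the profile still existed at this point (flag usage is standard kubectl/minikube, not taken from this log):

	# Show which cluster contexts the kubeconfig actually contains.
	kubectl config get-contexts
	# minikube's suggested fix rewrites the endpoint for the named profile.
	minikube update-context -p running-upgrade-372245
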
helpers_test.go:175: Cleaning up "running-upgrade-372245" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-372245
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-372245: (3.221695102s)
--- FAIL: TestRunningBinaryUpgrade (96.64s)

                                                
                                    
x
+
TestMissingContainerUpgrade (464.39s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:322: (dbg) Run:  /tmp/minikube-v1.17.0.4079557454.exe start -p missing-upgrade-670974 --memory=2200 --driver=docker  --container-runtime=crio
E1128 00:05:46.163047 1460652 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/ingress-addon-legacy-684553/client.crt: no such file or directory
version_upgrade_test.go:322: (dbg) Non-zero exit: /tmp/minikube-v1.17.0.4079557454.exe start -p missing-upgrade-670974 --memory=2200 --driver=docker  --container-runtime=crio: exit status 80 (51.55596506s)

                                                
                                                
-- stdout --
	* [missing-upgrade-670974] minikube v1.17.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17206
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17206-1455288/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17206-1455288/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* minikube 1.32.0 is available! Download it: https://github.com/kubernetes/minikube/releases/tag/v1.32.0
	* To disable this notice, run: 'minikube config set WantUpdateNotification false'
	
	* Using the docker driver based on user configuration
	* Starting control plane node missing-upgrade-670974 in cluster missing-upgrade-670974
	* Pulling base image ...
	* Downloading Kubernetes v1.20.2 preload ...
	* Creating docker container (CPUs=2, Memory=2200MB) ... (spinner animation elided)
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	    > preloaded-images-k8s-v8-v1....: 579.81 MiB / 579.81 MiB  100.00% 31.88 MiB (progress-bar updates elided)
	X Exiting due to GUEST_PROVISION: Failed to start host: can't create with that IP, address already in use
	* 
	* If the above advice does not help, please let us know: 
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
version_upgrade_test.go:322: (dbg) Run:  /tmp/minikube-v1.17.0.4079557454.exe start -p missing-upgrade-670974 --memory=2200 --driver=docker  --container-runtime=crio
E1128 00:06:33.164184 1460652 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/functional-428453/client.crt: no such file or directory
version_upgrade_test.go:322: (dbg) Non-zero exit: /tmp/minikube-v1.17.0.4079557454.exe start -p missing-upgrade-670974 --memory=2200 --driver=docker  --container-runtime=crio: exit status 80 (3m22.693911226s)

                                                
                                                
-- stdout --
	* [missing-upgrade-670974] minikube v1.17.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17206
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17206-1455288/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17206-1455288/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting control plane node missing-upgrade-670974 in cluster missing-upgrade-670974
	* Pulling base image ...
	* docker "missing-upgrade-670974" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ... (spinner animation elided)
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PROVISION: Failed to start host: can't create with that IP, address already in use
	* 
	* If the above advice does not help, please let us know: 
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
version_upgrade_test.go:322: (dbg) Run:  /tmp/minikube-v1.17.0.4079557454.exe start -p missing-upgrade-670974 --memory=2200 --driver=docker  --container-runtime=crio
E1128 00:11:23.460969 1460652 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/addons-606180/client.crt: no such file or directory
E1128 00:11:33.163519 1460652 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/functional-428453/client.crt: no such file or directory
version_upgrade_test.go:322: (dbg) Non-zero exit: /tmp/minikube-v1.17.0.4079557454.exe start -p missing-upgrade-670974 --memory=2200 --driver=docker  --container-runtime=crio: exit status 80 (3m22.319941911s)

                                                
                                                
-- stdout --
	* [missing-upgrade-670974] minikube v1.17.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17206
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17206-1455288/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17206-1455288/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting control plane node missing-upgrade-670974 in cluster missing-upgrade-670974
	* Pulling base image ...
	* docker "missing-upgrade-670974" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ... (spinner animation elided)
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PROVISION: Failed to start host: can't create with that IP, address already in use
	* 
	* If the above advice does not help, please let us know: 
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
version_upgrade_test.go:328: release start failed: exit status 80
panic.go:523: *** TestMissingContainerUpgrade FAILED at 2023-11-28 00:13:18.328517025 +0000 UTC m=+2609.547351234
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMissingContainerUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect missing-upgrade-670974
helpers_test.go:235: (dbg) docker inspect missing-upgrade-670974:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "2b5fcfcfe49321f2e1d70ea5908c4f09e2f9335ee0937adb6828baf9bc3875fe",
	        "Created": "2023-11-28T00:13:10.188772733Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "created",
	            "Running": false,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 0,
	            "ExitCode": 128,
	            "Error": "Address already in use",
	            "StartedAt": "0001-01-01T00:00:00Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:9b79b8263a5873a7b57b8bb7698df1f71e90108b3174dea92dc6c576c0a9dbf9",
	        "ResolvConfPath": "",
	        "HostnamePath": "",
	        "HostsPath": "",
	        "LogPath": "",
	        "Name": "/missing-upgrade-670974",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "missing-upgrade-670974:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "missing-upgrade-670974",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/e476e0223375d110b70354045bdee1dac50dd8884d52de841ea41087faa978b5-init/diff:/var/lib/docker/overlay2/be89b1a642a647736fe4777a54975f65d2b852c408ffd505ff1ad0ee53670c8b/diff:/var/lib/docker/overlay2/d4fc654d629fd8c7c78e397fe8c5de839625336a2b8844e8efefc6a243570c62/diff:/var/lib/docker/overlay2/cc52b963cea2003555382d7367089836676f8e2131b87937819226d9caff3459/diff:/var/lib/docker/overlay2/941f5079f0fe24d645d14d84277bced595310b70291d98290932fc9ba55ee8e4/diff:/var/lib/docker/overlay2/0dd25be046e71d814b93c0721424ec6f74370fb3fb3fb81f1e6235a061d9eed6/diff:/var/lib/docker/overlay2/395909bca833dc7b0cd310b4b58445641497771f697bd3d3b79a67508ca9448e/diff:/var/lib/docker/overlay2/0df8e9f904960f11645a4d5ad3fb1b56afaa022b83ce1c28c4e9a439011d325f/diff:/var/lib/docker/overlay2/861f45f509d6550f7eaab835ee7278ca7842d7980ae7185723bda584984fddaa/diff:/var/lib/docker/overlay2/0fc85348e6051ff3b0f65382578c60ab23e7d706ab997f4cd2d8c2e11c281ec1/diff:/var/lib/docker/overlay2/59188b
e36a07eaf04b60a8be579b5463e1a0ad343d1004b7b2805f7570913c68/diff:/var/lib/docker/overlay2/ddc15eb3eb1dde50a71334ba4e37f4120231a06d3451d75cb1382a565c73a242/diff:/var/lib/docker/overlay2/59138126445eef8c286edff1c97a1801a36b83c651894777bd3df8c35ff50cdf/diff:/var/lib/docker/overlay2/fcf045277ee1e42833c6409af7e03a0252aa2079f49585ce97ae472fd77b2918/diff:/var/lib/docker/overlay2/f41e7a63e807743274dd9066a8d0a08e43280b5ecaaaf845d3fdf5dbecaf62da/diff:/var/lib/docker/overlay2/fd3f9b5406e380f8c80876d9cbd70cf75ba90152a32b99cb2a584eb06bd04354/diff:/var/lib/docker/overlay2/f5ab162e5260d272456cbcad893457c0b897c832bb2b9d54af477cb1fee99788/diff:/var/lib/docker/overlay2/aacbd4773c3c0057c51d9c93eafe834721e8dbd6f65ee19a0a21d009dc219b5a/diff:/var/lib/docker/overlay2/788bec70c7a54177ede92cde4d217606c26af06a01568c755719d8dadb3664a4/diff:/var/lib/docker/overlay2/30c60b23e41d397a6b900bb18309f0201de2ed0cf8fd4c101f973476cd2a2e3d/diff:/var/lib/docker/overlay2/71a9e815fb66d44369755416a846df7559b060d56ad5e1f5cff9896985d95ff2/diff:/var/lib/d
ocker/overlay2/87c55e1a92915415bf1ce4e2a0f8b95648c9538ebb8a99b2beb392e5bc8f6b04/diff:/var/lib/docker/overlay2/856a64d049fbb5297e1545cadfca54fe53fcf82927b67741837bb1004fa1ce64/diff:/var/lib/docker/overlay2/9a5af96375cf210ca9502ee1fd3ed8e45f8cf276cebb75b4c838b6cf4b8bdedd/diff:/var/lib/docker/overlay2/f50adc28d2b8d31772f577952247d1fc58b041953b86c8721f32e4166325a6d9/diff:/var/lib/docker/overlay2/598ed6360036476c0564b453e7d80ffa450440bb29422505309ee45e2fd34802/diff:/var/lib/docker/overlay2/03a38c2a9c289d88e12931da2846c2587578844cdbdeae6967d0f82a33d2f7f2/diff:/var/lib/docker/overlay2/0848375e2519bac7b1100a1f382394b77828c2d3ce46eb0fbc5f038246a59c6f/diff:/var/lib/docker/overlay2/52c8b5315f0a099fd55bff30bf87f2ce8f617381fecbe75755cbd15fe5c09990/diff:/var/lib/docker/overlay2/cfa02b583e13cfb39ebc5d94665d77dcae1c284171691f40eb263f1d7e6c679e/diff:/var/lib/docker/overlay2/6abdd96da9bc9961f17d1c043ec4e979d2913f274869dc4161bdc44f7906addf/diff:/var/lib/docker/overlay2/1b608a88ec6bca94ac84f1b3e21be68c6ba07e8edff5589697b591126f7
2a5c8/diff:/var/lib/docker/overlay2/4028de82c034093fdcf276854ab5e5f580a86b84d0fbc64dc90b0f95335ae534/diff:/var/lib/docker/overlay2/1ccd252a3386ec6a047050d5d687b9618eed5a34b1f150779ef8633be9fbcc46/diff:/var/lib/docker/overlay2/9af2077ad06d16ab8232b6f7d38995198e8d35cbaec52cc53f5c6ac0ab83e046/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e476e0223375d110b70354045bdee1dac50dd8884d52de841ea41087faa978b5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e476e0223375d110b70354045bdee1dac50dd8884d52de841ea41087faa978b5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e476e0223375d110b70354045bdee1dac50dd8884d52de841ea41087faa978b5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "missing-upgrade-670974",
	                "Source": "/var/lib/docker/volumes/missing-upgrade-670974/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "missing-upgrade-670974",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "missing-upgrade-670974",
	                "name.minikube.sigs.k8s.io": "missing-upgrade-670974",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {},
	            "SandboxKey": "",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "missing-upgrade-670974": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.59.255"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "2b5fcfcfe493",
	                        "missing-upgrade-670974"
	                    ],
	                    "NetworkID": "",
	                    "EndpointID": "",
	                    "Gateway": "",
	                    "IPAddress": "",
	                    "IPPrefixLen": 0,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
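The fields that actually explain the failure can be pulled from that dump with inspect templates rather than scanning the full JSON (a sketch against the stock docker CLI; the field paths are taken from the output above). Note that 192.168.59.255 would be the broadcast address if the profile network is a /24, which on its own would make the static assignment unusable; that is a plausible, though unverified, reading of the "Address already in use" error:

	# Container never started: status "created", exit code 128, error "Address already in use"
	docker inspect -f '{{.State.Status}} {{.State.ExitCode}} {{.State.Error}}' missing-upgrade-670974
	# The statically requested IPv4 address that collided (per IPAMConfig above)
	docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAMConfig.IPv4Address}}{{end}}' missing-upgrade-670974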
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p missing-upgrade-670974 -n missing-upgrade-670974
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p missing-upgrade-670974 -n missing-upgrade-670974: exit status 7 (100.978586ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "missing-upgrade-670974" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "missing-upgrade-670974" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-670974
E1128 00:13:20.412062 1460652 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/addons-606180/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-670974: (4.643545235s)
--- FAIL: TestMissingContainerUpgrade (464.39s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (2069.2s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:196: (dbg) Run:  /tmp/minikube-v1.17.0.1095788440.exe start -p stopped-upgrade-714093 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:196: (dbg) Non-zero exit: /tmp/minikube-v1.17.0.1095788440.exe start -p stopped-upgrade-714093 --memory=2200 --vm-driver=docker  --container-runtime=crio: exit status 109 (8m45.471109342s)

                                                
                                                
-- stdout --
	* [stopped-upgrade-714093] minikube v1.17.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17206
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17206-1455288/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	  - KUBECONFIG=/tmp/legacy_kubeconfig1756268973
	* Using the docker driver based on user configuration
	* Starting control plane node stopped-upgrade-714093 in cluster stopped-upgrade-714093
	* Pulling base image ...
	* Downloading Kubernetes v1.20.2 preload ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* Preparing Kubernetes v1.20.2 on CRI-O 1.19.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	    > preloaded-images-k8s-v8-v1....: 579.81 MiB / 579.81 MiB  100.00% 29.84 Mi
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1050-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.2
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1050-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.2
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1050-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
	
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Related issue: https://github.com/kubernetes/minikube/issues/4172

                                                
                                                
** /stderr **
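The tail of that output carries its own remediation hint. A hedged follow-up along those lines, using only commands and flags that appear in the log above (whether the cgroup-driver override actually unblocks this minikube v1.17.0 / CRI-O 1.19.1 combination on arm64 is untested):

	# Inspect why the kubelet never came up inside the node container
	/tmp/minikube-v1.17.0.1095788440.exe -p stopped-upgrade-714093 ssh "sudo journalctl -xeu kubelet | tail -n 50"
	# Retry with the kubelet pinned to the systemd cgroup driver, as the log suggests
	/tmp/minikube-v1.17.0.1095788440.exe start -p stopped-upgrade-714093 --memory=2200 \
	  --vm-driver=docker --container-runtime=crio --extra-config=kubelet.cgroup-driver=systemd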
version_upgrade_test.go:196: (dbg) Run:  /tmp/minikube-v1.17.0.1095788440.exe start -p stopped-upgrade-714093 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:196: (dbg) Non-zero exit: /tmp/minikube-v1.17.0.1095788440.exe start -p stopped-upgrade-714093 --memory=2200 --vm-driver=docker  --container-runtime=crio: exit status 109 (13m9.003737805s)

                                                
                                                
-- stdout --
	* [stopped-upgrade-714093] minikube v1.17.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17206
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17206-1455288/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	  - KUBECONFIG=/tmp/legacy_kubeconfig2509342599
	* Using the docker driver based on existing profile
	* Starting control plane node stopped-upgrade-714093 in cluster stopped-upgrade-714093
	* Downloading Kubernetes v1.20.2 preload ...
	* Updating the running docker "stopped-upgrade-714093" container ...
	* Stopping node "stopped-upgrade-714093"  ...
	* Powering off "stopped-upgrade-714093" via SSH ...
	* Starting control plane node stopped-upgrade-714093 in cluster stopped-upgrade-714093
	* Downloading Kubernetes v1.20.2 preload ...
	* Restarting existing docker container for "stopped-upgrade-714093" ...
	* Found network options:
	  - NO_PROXY=192.168.70.110
	* Preparing Kubernetes v1.20.2 on CRI-O 1.19.1 ...
	  - env NO_PROXY=192.168.70.110
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
K\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW|
WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ 
WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- W
W\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW
| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW
[K/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW
- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW[
K\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW|
WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ 
WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- W
W\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW
| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW
[K/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	    > preloaded-images-k8s-v8-v1....: 579.81 MiB / 579.81 MiB  100.00% 27.44 MiB
	! Due to issues with CRI-O post v1.17.3, we need to restart your cluster.
	! See details at https://github.com/kubernetes/minikube/issues/8861
	    > preloaded-images-k8s-v8-v1....: 579.81 MiB / 579.81 MiB  100.00% 30.21 MiB
	! Unable to restart cluster, will reset it: apiserver healthz: apiserver process never appeared
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.2
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1050-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
	
	* 
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[... kubeadm init output identical to the first attempt above ...]
	stderr:
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[... kubeadm init output identical to the first attempt above ...]
	stderr:
	
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Related issue: https://github.com/kubernetes/minikube/issues/4172

                                                
                                                
** /stderr **
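The failed upgrade above dies the same way on all three attempts: kubeadm writes the static-pod manifests, but the kubelet never brings the control plane up, so the 4m0s wait-control-plane check times out. The log already names the useful diagnostics; the sketch below just collects them into one pass. It is a minimal sketch, not part of the captured run: it assumes a shell on the node (e.g. via minikube ssh), uses the CRI-O socket path /var/run/crio/crio.sock exactly as the log prints it, and CONTAINERID is a placeholder to fill in from the ps output.

	# Is the kubelet running at all, and what is it complaining about?
	systemctl status kubelet
	journalctl -xeu kubelet

	# List every Kubernetes container CRI-O knows about, filtering out pause sandboxes.
	crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause

	# Inspect the logs of whichever control-plane container is failing (placeholder ID).
	crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID

	# Retry the same invocation the harness used, with the suggested cgroup-driver fix.
	/tmp/minikube-v1.17.0.1095788440.exe start -p stopped-upgrade-714093 --memory=2200 \
	  --vm-driver=docker --container-runtime=crio \
	  --extra-config=kubelet.cgroup-driver=systemd

A cgroupfs/systemd driver mismatch between the container runtime and the kubelet is a common cause of exactly this hang, which is why the report's own suggestion line singles out kubelet.cgroup-driver=systemd (see the minikube issue #4172 linked above).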
E1128 00:35:10.680686 1460652 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/old-k8s-version-428712/client.crt: no such file or directory
version_upgrade_test.go:196: (dbg) Run:  /tmp/minikube-v1.17.0.1095788440.exe start -p stopped-upgrade-714093 --memory=2200 --vm-driver=docker  --container-runtime=crio
E1128 00:35:15.801032 1460652 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/old-k8s-version-428712/client.crt: no such file or directory
E1128 00:35:26.041781 1460652 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/old-k8s-version-428712/client.crt: no such file or directory
E1128 00:35:46.522253 1460652 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/old-k8s-version-428712/client.crt: no such file or directory
E1128 00:36:16.207524 1460652 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/functional-428453/client.crt: no such file or directory
E1128 00:36:27.483051 1460652 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/old-k8s-version-428712/client.crt: no such file or directory
E1128 00:36:33.163623 1460652 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/functional-428453/client.crt: no such file or directory
E1128 00:37:49.403348 1460652 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/old-k8s-version-428712/client.crt: no such file or directory
E1128 00:38:20.411250 1460652 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/addons-606180/client.crt: no such file or directory
E1128 00:39:06.164976 1460652 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/ingress-addon-legacy-684553/client.crt: no such file or directory
E1128 00:39:23.120134 1460652 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/ingress-addon-legacy-684553/client.crt: no such file or directory
E1128 00:40:05.559883 1460652 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/old-k8s-version-428712/client.crt: no such file or directory
version_upgrade_test.go:196: (dbg) Non-zero exit: /tmp/minikube-v1.17.0.1095788440.exe start -p stopped-upgrade-714093 --memory=2200 --vm-driver=docker  --container-runtime=crio: exit status 109 (12m31.493865348s)

                                                
                                                
-- stdout --
	* [stopped-upgrade-714093] minikube v1.17.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17206
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17206-1455288/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	  - KUBECONFIG=/tmp/legacy_kubeconfig3850418331
	* Using the docker driver based on existing profile
	* Starting control plane node stopped-upgrade-714093 in cluster stopped-upgrade-714093
	* Downloading Kubernetes v1.20.2 preload ...
	* Updating the running docker "stopped-upgrade-714093" container ...
	* Stopping node "stopped-upgrade-714093"  ...
	* Powering off "stopped-upgrade-714093" via SSH ...
	* Starting control plane node stopped-upgrade-714093 in cluster stopped-upgrade-714093
	* Restarting existing docker container for "stopped-upgrade-714093" ...
	* Found network options:
	  - NO_PROXY=192.168.70.110
	* Preparing Kubernetes v1.20.2 on CRI-O 1.19.1 ...
	  - env NO_PROXY=192.168.70.110
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	    > preloaded-images-k8s-v8-v1....: 579.81 MiB / 579.81 MiB  100.00% 63.38 MiB  [intermediate download-progress updates omitted]
	! Due to issues with CRI-O post v1.17.3, we need to restart your cluster.
	! See details at https://github.com/kubernetes/minikube/issues/8861
	! Unable to restart cluster, will reset it: apiserver healthz: apiserver process never appeared
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.2
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1050-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
	
	* 
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[kubeadm init output identical to the first attempt above omitted]
	
	stderr:
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[kubeadm init output identical to the first attempt above omitted]
	
	stderr:
	
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Related issue: https://github.com/kubernetes/minikube/issues/4172

                                                
                                                
** /stderr **
version_upgrade_test.go:202: legacy v1.17.0 start failed: exit status 109
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (2069.20s)
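The log's own "Suggestion" line points at a kubelet cgroup-driver mismatch (minikube issue #4172) as the likely cause. A minimal sketch of that suggested workaround, assuming the legacy binary from this test is available as minikube-v1.17.0 (the binary and profile names here are illustrative, not taken from the run):

	# Force the kubelet onto the systemd cgroup driver, per the Suggestion
	# line above; the remaining flags mirror the failing start.
	minikube-v1.17.0 start -p stopped-upgrade-repro \
	  --driver=docker --container-runtime=crio \
	  --extra-config=kubelet.cgroup-driver=systemd

	# If the apiserver still never appears, inspect the control-plane
	# containers with the crictl invocation the kubeadm output recommends.
	minikube-v1.17.0 ssh -p stopped-upgrade-repro -- \
	  "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"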

                                                
                                    

Test pass (275/314)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 13.2
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.1
10 TestDownloadOnly/v1.28.4/json-events 13.48
11 TestDownloadOnly/v1.28.4/preload-exists 0
15 TestDownloadOnly/v1.28.4/LogsDuration 0.09
17 TestDownloadOnly/v1.29.0-rc.0/json-events 12.31
18 TestDownloadOnly/v1.29.0-rc.0/preload-exists 0
22 TestDownloadOnly/v1.29.0-rc.0/LogsDuration 0.2
23 TestDownloadOnly/DeleteAll 0.4
24 TestDownloadOnly/DeleteAlwaysSucceeds 0.26
26 TestBinaryMirror 0.68
30 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.09
31 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.1
32 TestAddons/Setup 169.64
34 TestAddons/parallel/Registry 17.97
36 TestAddons/parallel/InspektorGadget 10.8
37 TestAddons/parallel/MetricsServer 5.86
40 TestAddons/parallel/CSI 42.21
41 TestAddons/parallel/Headlamp 11.58
42 TestAddons/parallel/CloudSpanner 5.66
43 TestAddons/parallel/LocalPath 53.15
44 TestAddons/parallel/NvidiaDevicePlugin 5.77
47 TestAddons/serial/GCPAuth/Namespaces 0.16
48 TestAddons/StoppedEnableDisable 12.35
49 TestCertOptions 41.82
50 TestCertExpiration 235.59
52 TestForceSystemdFlag 36.39
53 TestForceSystemdEnv 39.73
59 TestErrorSpam/setup 32.04
60 TestErrorSpam/start 0.9
61 TestErrorSpam/status 1.16
62 TestErrorSpam/pause 1.87
63 TestErrorSpam/unpause 2.09
64 TestErrorSpam/stop 1.48
67 TestFunctional/serial/CopySyncFile 0
68 TestFunctional/serial/StartWithProxy 75.3
69 TestFunctional/serial/AuditLog 0
70 TestFunctional/serial/SoftStart 34.86
71 TestFunctional/serial/KubeContext 0.06
72 TestFunctional/serial/KubectlGetPods 0.1
75 TestFunctional/serial/CacheCmd/cache/add_remote 3.95
76 TestFunctional/serial/CacheCmd/cache/add_local 1.19
77 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.09
78 TestFunctional/serial/CacheCmd/cache/list 0.08
79 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.33
80 TestFunctional/serial/CacheCmd/cache/cache_reload 2.25
81 TestFunctional/serial/CacheCmd/cache/delete 0.15
82 TestFunctional/serial/MinikubeKubectlCmd 0.17
83 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.17
84 TestFunctional/serial/ExtraConfig 37.17
85 TestFunctional/serial/ComponentHealth 0.1
86 TestFunctional/serial/LogsCmd 1.87
87 TestFunctional/serial/LogsFileCmd 1.86
88 TestFunctional/serial/InvalidService 4.43
90 TestFunctional/parallel/ConfigCmd 0.66
91 TestFunctional/parallel/DashboardCmd 14.07
92 TestFunctional/parallel/DryRun 0.68
93 TestFunctional/parallel/InternationalLanguage 0.22
94 TestFunctional/parallel/StatusCmd 1.19
98 TestFunctional/parallel/ServiceCmdConnect 9.71
99 TestFunctional/parallel/AddonsCmd 0.18
100 TestFunctional/parallel/PersistentVolumeClaim 25.08
102 TestFunctional/parallel/SSHCmd 0.85
103 TestFunctional/parallel/CpCmd 1.57
105 TestFunctional/parallel/FileSync 0.44
106 TestFunctional/parallel/CertSync 2.53
110 TestFunctional/parallel/NodeLabels 0.12
112 TestFunctional/parallel/NonActiveRuntimeDisabled 0.89
114 TestFunctional/parallel/License 0.34
116 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.63
117 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
119 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 10.49
120 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.1
121 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
125 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.21
126 TestFunctional/parallel/ServiceCmd/DeployApp 7.23
127 TestFunctional/parallel/ProfileCmd/profile_not_create 0.45
128 TestFunctional/parallel/ProfileCmd/profile_list 0.43
129 TestFunctional/parallel/ProfileCmd/profile_json_output 0.45
130 TestFunctional/parallel/MountCmd/any-port 7.96
131 TestFunctional/parallel/ServiceCmd/List 0.67
132 TestFunctional/parallel/ServiceCmd/JSONOutput 0.57
133 TestFunctional/parallel/ServiceCmd/HTTPS 0.44
134 TestFunctional/parallel/ServiceCmd/Format 0.45
135 TestFunctional/parallel/ServiceCmd/URL 0.55
136 TestFunctional/parallel/MountCmd/specific-port 2.6
137 TestFunctional/parallel/MountCmd/VerifyCleanup 2.22
138 TestFunctional/parallel/Version/short 0.1
139 TestFunctional/parallel/Version/components 1.25
140 TestFunctional/parallel/ImageCommands/ImageListShort 0.29
141 TestFunctional/parallel/ImageCommands/ImageListTable 0.35
142 TestFunctional/parallel/ImageCommands/ImageListJson 0.33
143 TestFunctional/parallel/ImageCommands/ImageListYaml 0.33
144 TestFunctional/parallel/ImageCommands/ImageBuild 3.21
145 TestFunctional/parallel/ImageCommands/Setup 3.56
146 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 5.05
147 TestFunctional/parallel/UpdateContextCmd/no_changes 0.22
148 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.26
149 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.23
150 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 3.3
151 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 5.87
152 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.91
153 TestFunctional/parallel/ImageCommands/ImageRemove 0.57
154 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.32
155 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.96
156 TestFunctional/delete_addon-resizer_images 0.09
157 TestFunctional/delete_my-image_image 0.02
158 TestFunctional/delete_minikube_cached_images 0.02
162 TestIngressAddonLegacy/StartLegacyK8sCluster 88.59
164 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 12.06
165 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.65
169 TestJSONOutput/start/Command 75.62
170 TestJSONOutput/start/Audit 0
172 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
173 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
175 TestJSONOutput/pause/Command 0.81
176 TestJSONOutput/pause/Audit 0
178 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
179 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
181 TestJSONOutput/unpause/Command 0.76
182 TestJSONOutput/unpause/Audit 0
184 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
185 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
187 TestJSONOutput/stop/Command 5.91
188 TestJSONOutput/stop/Audit 0
190 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
191 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
192 TestErrorJSONOutput 0.27
194 TestKicCustomNetwork/create_custom_network 50.15
195 TestKicCustomNetwork/use_default_bridge_network 37.05
196 TestKicExistingNetwork 37.28
197 TestKicCustomSubnet 37.49
198 TestKicStaticIP 33.69
199 TestMainNoArgs 0.07
200 TestMinikubeProfile 69.49
203 TestMountStart/serial/StartWithMountFirst 9.46
204 TestMountStart/serial/VerifyMountFirst 0.31
205 TestMountStart/serial/StartWithMountSecond 7
206 TestMountStart/serial/VerifyMountSecond 0.3
207 TestMountStart/serial/DeleteFirst 1.67
208 TestMountStart/serial/VerifyMountPostDelete 0.32
209 TestMountStart/serial/Stop 1.23
210 TestMountStart/serial/RestartStopped 8
211 TestMountStart/serial/VerifyMountPostStop 0.29
214 TestMultiNode/serial/FreshStart2Nodes 129.38
215 TestMultiNode/serial/DeployApp2Nodes 5.66
217 TestMultiNode/serial/AddNode 50.83
218 TestMultiNode/serial/ProfileList 0.37
219 TestMultiNode/serial/CopyFile 11.33
220 TestMultiNode/serial/StopNode 2.47
221 TestMultiNode/serial/StartAfterStop 12.89
222 TestMultiNode/serial/RestartKeepsNodes 120.34
223 TestMultiNode/serial/DeleteNode 5.16
224 TestMultiNode/serial/StopMultiNode 24.06
225 TestMultiNode/serial/RestartMultiNode 79.22
226 TestMultiNode/serial/ValidateNameConflict 34.23
231 TestPreload 175.2
236 TestInsufficientStorage 10.97
239 TestKubernetesUpgrade 373.62
242 TestNoKubernetes/serial/StartNoK8sWithVersion 0.11
243 TestNoKubernetes/serial/StartWithK8s 45.14
244 TestNoKubernetes/serial/StartWithStopK8s 14.17
245 TestNoKubernetes/serial/Start 9.27
246 TestNoKubernetes/serial/VerifyK8sNotRunning 0.3
247 TestNoKubernetes/serial/ProfileList 0.78
248 TestNoKubernetes/serial/Stop 1.24
249 TestNoKubernetes/serial/StartNoArgs 7.9
250 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.3
251 TestStoppedBinaryUpgrade/Setup 1.45
261 TestPause/serial/Start 76
262 TestPause/serial/SecondStartNoReconfiguration 35.62
263 TestPause/serial/Pause 0.85
264 TestPause/serial/VerifyStatus 0.39
265 TestPause/serial/Unpause 0.81
266 TestPause/serial/PauseAgain 1.03
267 TestPause/serial/DeletePaused 2.88
268 TestPause/serial/VerifyDeletedResources 0.41
276 TestNetworkPlugins/group/false 4.28
281 TestStartStop/group/old-k8s-version/serial/FirstStart 120.63
282 TestStartStop/group/old-k8s-version/serial/DeployApp 9.7
283 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.12
284 TestStartStop/group/old-k8s-version/serial/Stop 12.09
285 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.23
286 TestStartStop/group/old-k8s-version/serial/SecondStart 432.13
287 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 5.03
288 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.13
289 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.39
290 TestStartStop/group/old-k8s-version/serial/Pause 3.39
292 TestStartStop/group/no-preload/serial/FirstStart 64.57
293 TestStartStop/group/no-preload/serial/DeployApp 11.02
294 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.16
295 TestStartStop/group/no-preload/serial/Stop 12.11
296 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.22
297 TestStartStop/group/no-preload/serial/SecondStart 353.17
298 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 14.04
299 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.12
300 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.35
301 TestStartStop/group/no-preload/serial/Pause 3.47
303 TestStartStop/group/embed-certs/serial/FirstStart 83.49
304 TestStartStop/group/embed-certs/serial/DeployApp 8.5
305 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.17
306 TestStartStop/group/embed-certs/serial/Stop 12.06
307 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.22
308 TestStartStop/group/embed-certs/serial/SecondStart 352.3
309 TestStoppedBinaryUpgrade/MinikubeLogs 0.87
311 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 89.57
312 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 17.1
313 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.14
314 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.39
315 TestStartStop/group/embed-certs/serial/Pause 3.39
317 TestStartStop/group/newest-cni/serial/FirstStart 44.71
318 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.69
319 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.31
320 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.26
321 TestStartStop/group/newest-cni/serial/DeployApp 0
322 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.1
323 TestStartStop/group/newest-cni/serial/Stop 1.31
324 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.22
325 TestStartStop/group/newest-cni/serial/SecondStart 38.41
326 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.37
327 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 356.5
328 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
329 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
330 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.38
331 TestStartStop/group/newest-cni/serial/Pause 3.24
332 TestNetworkPlugins/group/auto/Start 74.78
333 TestNetworkPlugins/group/auto/KubeletFlags 0.36
334 TestNetworkPlugins/group/auto/NetCatPod 10.39
335 TestNetworkPlugins/group/auto/DNS 0.24
336 TestNetworkPlugins/group/auto/Localhost 0.22
337 TestNetworkPlugins/group/auto/HairPin 0.2
338 TestNetworkPlugins/group/kindnet/Start 79.82
339 TestNetworkPlugins/group/kindnet/ControllerPod 5.04
340 TestNetworkPlugins/group/kindnet/KubeletFlags 0.32
341 TestNetworkPlugins/group/kindnet/NetCatPod 10.33
342 TestNetworkPlugins/group/kindnet/DNS 0.23
343 TestNetworkPlugins/group/kindnet/Localhost 0.2
344 TestNetworkPlugins/group/kindnet/HairPin 0.2
345 TestNetworkPlugins/group/calico/Start 76.52
346 TestNetworkPlugins/group/calico/ControllerPod 5.04
347 TestNetworkPlugins/group/calico/KubeletFlags 0.43
348 TestNetworkPlugins/group/calico/NetCatPod 14.44
349 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 10.08
350 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.11
351 TestNetworkPlugins/group/calico/DNS 0.24
352 TestNetworkPlugins/group/calico/Localhost 0.23
353 TestNetworkPlugins/group/calico/HairPin 0.2
354 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.4
355 TestStartStop/group/default-k8s-diff-port/serial/Pause 4.81
356 TestNetworkPlugins/group/custom-flannel/Start 69.91
357 TestNetworkPlugins/group/enable-default-cni/Start 93.48
358 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.48
359 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.53
360 TestNetworkPlugins/group/custom-flannel/DNS 0.21
361 TestNetworkPlugins/group/custom-flannel/Localhost 0.19
362 TestNetworkPlugins/group/custom-flannel/HairPin 0.2
363 TestNetworkPlugins/group/flannel/Start 54.6
364 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.46
365 TestNetworkPlugins/group/enable-default-cni/NetCatPod 14.77
366 TestNetworkPlugins/group/enable-default-cni/DNS 0.25
367 TestNetworkPlugins/group/enable-default-cni/Localhost 0.2
368 TestNetworkPlugins/group/enable-default-cni/HairPin 0.22
369 TestNetworkPlugins/group/bridge/Start 49.23
370 TestNetworkPlugins/group/flannel/ControllerPod 5.11
371 TestNetworkPlugins/group/flannel/KubeletFlags 0.38
372 TestNetworkPlugins/group/flannel/NetCatPod 11.4
373 TestNetworkPlugins/group/flannel/DNS 0.3
374 TestNetworkPlugins/group/flannel/Localhost 0.29
375 TestNetworkPlugins/group/flannel/HairPin 0.33
376 TestNetworkPlugins/group/bridge/KubeletFlags 0.34
377 TestNetworkPlugins/group/bridge/NetCatPod 10.32
378 TestNetworkPlugins/group/bridge/DNS 0.24
379 TestNetworkPlugins/group/bridge/Localhost 0.21
380 TestNetworkPlugins/group/bridge/HairPin 0.2
x
+
TestDownloadOnly/v1.16.0/json-events (13.2s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-717158 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-717158 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (13.200820995s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (13.20s)
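With -o=json, minikube emits its progress as a stream of CloudEvents-style JSON objects, which is what this json-events subtest asserts on. A minimal sketch for inspecting that stream by hand, reusing the command from the run above and assuming jq is available (the .data.message selection is illustrative):

	out/minikube-linux-arm64 start -o=json --download-only -p download-only-717158 \
	  --force --kubernetes-version=v1.16.0 --container-runtime=crio --driver=docker \
	  | jq -r 'select(.data.message != null) | [.type, .data.message] | @tsv'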

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/LogsDuration (0.1s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-717158
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-717158: exit status 85 (94.547401ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-717158 | jenkins | v1.32.0 | 27 Nov 23 23:29 UTC |          |
	|         | -p download-only-717158        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/11/27 23:29:48
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.21.4 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1127 23:29:48.904628 1460657 out.go:296] Setting OutFile to fd 1 ...
	I1127 23:29:48.904788 1460657 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1127 23:29:48.904798 1460657 out.go:309] Setting ErrFile to fd 2...
	I1127 23:29:48.904804 1460657 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1127 23:29:48.905090 1460657 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17206-1455288/.minikube/bin
	W1127 23:29:48.905246 1460657 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17206-1455288/.minikube/config/config.json: open /home/jenkins/minikube-integration/17206-1455288/.minikube/config/config.json: no such file or directory
	I1127 23:29:48.905676 1460657 out.go:303] Setting JSON to true
	I1127 23:29:48.906730 1460657 start.go:128] hostinfo: {"hostname":"ip-172-31-29-130","uptime":22338,"bootTime":1701105451,"procs":328,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1050-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1127 23:29:48.906802 1460657 start.go:138] virtualization:  
	I1127 23:29:48.909773 1460657 out.go:97] [download-only-717158] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I1127 23:29:48.912030 1460657 out.go:169] MINIKUBE_LOCATION=17206
	W1127 23:29:48.910035 1460657 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/17206-1455288/.minikube/cache/preloaded-tarball: no such file or directory
	I1127 23:29:48.910098 1460657 notify.go:220] Checking for updates...
	I1127 23:29:48.913760 1460657 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1127 23:29:48.915821 1460657 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17206-1455288/kubeconfig
	I1127 23:29:48.917593 1460657 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17206-1455288/.minikube
	I1127 23:29:48.919659 1460657 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W1127 23:29:48.923927 1460657 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1127 23:29:48.924232 1460657 driver.go:378] Setting default libvirt URI to qemu:///system
	I1127 23:29:48.948724 1460657 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1127 23:29:48.948830 1460657 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1127 23:29:49.032641 1460657 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:42 SystemTime:2023-11-27 23:29:49.023068506 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1050-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1127 23:29:49.032788 1460657 docker.go:295] overlay module found
	I1127 23:29:49.034753 1460657 out.go:97] Using the docker driver based on user configuration
	I1127 23:29:49.034778 1460657 start.go:298] selected driver: docker
	I1127 23:29:49.034784 1460657 start.go:902] validating driver "docker" against <nil>
	I1127 23:29:49.034886 1460657 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1127 23:29:49.103688 1460657 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:42 SystemTime:2023-11-27 23:29:49.094294676 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1050-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1127 23:29:49.103841 1460657 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1127 23:29:49.104117 1460657 start_flags.go:394] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I1127 23:29:49.104282 1460657 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I1127 23:29:49.106427 1460657 out.go:169] Using Docker driver with root privileges
	I1127 23:29:49.108795 1460657 cni.go:84] Creating CNI manager for ""
	I1127 23:29:49.108818 1460657 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1127 23:29:49.108830 1460657 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1127 23:29:49.108847 1460657 start_flags.go:323] config:
	{Name:download-only-717158 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-717158 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1127 23:29:49.110679 1460657 out.go:97] Starting control plane node download-only-717158 in cluster download-only-717158
	I1127 23:29:49.110702 1460657 cache.go:121] Beginning downloading kic base image for docker with crio
	I1127 23:29:49.112410 1460657 out.go:97] Pulling base image ...
	I1127 23:29:49.112433 1460657 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I1127 23:29:49.112583 1460657 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 in local docker daemon
	I1127 23:29:49.129824 1460657 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 to local cache
	I1127 23:29:49.130026 1460657 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 in local cache directory
	I1127 23:29:49.130126 1460657 image.go:118] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 to local cache
	I1127 23:29:49.191206 1460657 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4
	I1127 23:29:49.191228 1460657 cache.go:56] Caching tarball of preloaded images
	I1127 23:29:49.191885 1460657 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I1127 23:29:49.194220 1460657 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I1127 23:29:49.194246 1460657 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4 ...
	I1127 23:29:49.311592 1460657 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:743cd3b7071469270e4dbdc0d89badaa -> /home/jenkins/minikube-integration/17206-1455288/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4
	I1127 23:29:58.018968 1460657 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 as a tarball
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-717158"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.10s)
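The non-zero exit does not fail this subtest: with a --download-only profile no control plane node was ever created, so "minikube logs" has nothing to collect and, in this run, exits with status 85 (see the "does not exist" message in the stdout above). A quick manual check of the same behavior:

	out/minikube-linux-arm64 logs -p download-only-717158
	echo "exit status: $?"   # 85 in this run: no control plane node exists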

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/json-events (13.48s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-717158 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-717158 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=crio --driver=docker  --container-runtime=crio: (13.479054974s)
--- PASS: TestDownloadOnly/v1.28.4/json-events (13.48s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/preload-exists
--- PASS: TestDownloadOnly/v1.28.4/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/LogsDuration (0.09s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-717158
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-717158: exit status 85 (89.261433ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-717158 | jenkins | v1.32.0 | 27 Nov 23 23:29 UTC |          |
	|         | -p download-only-717158        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-717158 | jenkins | v1.32.0 | 27 Nov 23 23:30 UTC |          |
	|         | -p download-only-717158        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.4   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/11/27 23:30:02
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.21.4 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1127 23:30:02.210709 1460731 out.go:296] Setting OutFile to fd 1 ...
	I1127 23:30:02.210913 1460731 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1127 23:30:02.210941 1460731 out.go:309] Setting ErrFile to fd 2...
	I1127 23:30:02.210963 1460731 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1127 23:30:02.211242 1460731 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17206-1455288/.minikube/bin
	W1127 23:30:02.211441 1460731 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17206-1455288/.minikube/config/config.json: open /home/jenkins/minikube-integration/17206-1455288/.minikube/config/config.json: no such file or directory
	I1127 23:30:02.211787 1460731 out.go:303] Setting JSON to true
	I1127 23:30:02.212886 1460731 start.go:128] hostinfo: {"hostname":"ip-172-31-29-130","uptime":22352,"bootTime":1701105451,"procs":325,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1050-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1127 23:30:02.213032 1460731 start.go:138] virtualization:  
	I1127 23:30:02.215478 1460731 out.go:97] [download-only-717158] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I1127 23:30:02.217485 1460731 out.go:169] MINIKUBE_LOCATION=17206
	I1127 23:30:02.215789 1460731 notify.go:220] Checking for updates...
	I1127 23:30:02.219972 1460731 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1127 23:30:02.222347 1460731 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17206-1455288/kubeconfig
	I1127 23:30:02.224307 1460731 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17206-1455288/.minikube
	I1127 23:30:02.226137 1460731 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W1127 23:30:02.230148 1460731 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1127 23:30:02.230753 1460731 config.go:182] Loaded profile config "download-only-717158": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	W1127 23:30:02.230808 1460731 start.go:810] api.Load failed for download-only-717158: filestore "download-only-717158": Docker machine "download-only-717158" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1127 23:30:02.230923 1460731 driver.go:378] Setting default libvirt URI to qemu:///system
	W1127 23:30:02.230955 1460731 start.go:810] api.Load failed for download-only-717158: filestore "download-only-717158": Docker machine "download-only-717158" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1127 23:30:02.256179 1460731 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1127 23:30:02.256287 1460731 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1127 23:30:02.334614 1460731 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:38 SystemTime:2023-11-27 23:30:02.323985587 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1050-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1127 23:30:02.334728 1460731 docker.go:295] overlay module found
	I1127 23:30:02.336817 1460731 out.go:97] Using the docker driver based on existing profile
	I1127 23:30:02.336857 1460731 start.go:298] selected driver: docker
	I1127 23:30:02.336865 1460731 start.go:902] validating driver "docker" against &{Name:download-only-717158 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-717158 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1127 23:30:02.337061 1460731 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1127 23:30:02.409686 1460731 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:38 SystemTime:2023-11-27 23:30:02.399262996 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1050-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1127 23:30:02.410212 1460731 cni.go:84] Creating CNI manager for ""
	I1127 23:30:02.410232 1460731 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1127 23:30:02.410247 1460731 start_flags.go:323] config:
	{Name:download-only-717158 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:download-only-717158 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1127 23:30:02.412541 1460731 out.go:97] Starting control plane node download-only-717158 in cluster download-only-717158
	I1127 23:30:02.412566 1460731 cache.go:121] Beginning downloading kic base image for docker with crio
	I1127 23:30:02.414462 1460731 out.go:97] Pulling base image ...
	I1127 23:30:02.414496 1460731 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1127 23:30:02.414677 1460731 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 in local docker daemon
	I1127 23:30:02.433117 1460731 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 to local cache
	I1127 23:30:02.433274 1460731 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 in local cache directory
	I1127 23:30:02.433298 1460731 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 in local cache directory, skipping pull
	I1127 23:30:02.433306 1460731 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 exists in cache, skipping pull
	I1127 23:30:02.433315 1460731 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 as a tarball
	I1127 23:30:02.487939 1460731 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4
	I1127 23:30:02.487970 1460731 cache.go:56] Caching tarball of preloaded images
	I1127 23:30:02.488148 1460731 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1127 23:30:02.490352 1460731 out.go:97] Downloading Kubernetes v1.28.4 preload ...
	I1127 23:30:02.490379 1460731 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4 ...
	I1127 23:30:02.613323 1460731 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4?checksum=md5:23e2271fd1a7b32f52ce36ae8363c081 -> /home/jenkins/minikube-integration/17206-1455288/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-717158"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.4/LogsDuration (0.09s)
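
The download URL in the log above pins the preload with an md5 query parameter (checksum=md5:23e2271fd1a7b32f52ce36ae8363c081). For readers who want to re-verify a cached preload tarball offline, here is a minimal Go sketch; the file path and expected digest are copied from the download.go:107 line above, while the helper name verifyMD5 is ours, not minikube's.

	package main

	import (
		"crypto/md5"
		"encoding/hex"
		"fmt"
		"io"
		"os"
	)

	// verifyMD5 streams the file through an md5 hash and compares the
	// hex digest against the expected value from the download URL.
	func verifyMD5(path, want string) error {
		f, err := os.Open(path)
		if err != nil {
			return err
		}
		defer f.Close()
		h := md5.New()
		if _, err := io.Copy(h, f); err != nil {
			return err
		}
		got := hex.EncodeToString(h.Sum(nil))
		if got != want {
			return fmt.Errorf("checksum mismatch: got %s, want %s", got, want)
		}
		return nil
	}

	func main() {
		// Path and checksum copied from the download.go:107 log line above.
		tarball := "/home/jenkins/minikube-integration/17206-1455288/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4"
		if err := verifyMD5(tarball, "23e2271fd1a7b32f52ce36ae8363c081"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("preload tarball OK")
	}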

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.0/json-events (12.31s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-717158 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-717158 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (12.313987975s)
--- PASS: TestDownloadOnly/v1.29.0-rc.0/json-events (12.31s)
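
The -o=json flag makes minikube emit machine-readable events on stdout, one JSON object per line, which is what this json-events test consumes. Below is a hedged Go sketch of such a consumer; the event schema is not shown in this report, so the code treats each event as an opaque map, and printing a "type" key is an illustrative assumption rather than a documented contract.

	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"os/exec"
	)

	func main() {
		// Mirrors the test command above (flags abbreviated for brevity).
		cmd := exec.Command("out/minikube-linux-arm64", "start",
			"-o=json", "--download-only", "-p", "download-only-717158",
			"--kubernetes-version=v1.29.0-rc.0", "--container-runtime=crio", "--driver=docker")
		out, err := cmd.StdoutPipe()
		if err != nil {
			panic(err)
		}
		if err := cmd.Start(); err != nil {
			panic(err)
		}
		sc := bufio.NewScanner(out)
		sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // events can be long lines
		for sc.Scan() {
			var ev map[string]any // schema treated as opaque here
			if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
				continue // skip any non-JSON noise
			}
			fmt.Printf("event type=%v\n", ev["type"]) // "type" key is an assumption
		}
		_ = cmd.Wait()
	}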

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.0/preload-exists
--- PASS: TestDownloadOnly/v1.29.0-rc.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.0/LogsDuration (0.2s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.0/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-717158
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-717158: exit status 85 (194.905975ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |               Args                |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only           | download-only-717158 | jenkins | v1.32.0 | 27 Nov 23 23:29 UTC |          |
	|         | -p download-only-717158           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0      |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	|         | --driver=docker                   |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	| start   | -o=json --download-only           | download-only-717158 | jenkins | v1.32.0 | 27 Nov 23 23:30 UTC |          |
	|         | -p download-only-717158           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.4      |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	|         | --driver=docker                   |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	| start   | -o=json --download-only           | download-only-717158 | jenkins | v1.32.0 | 27 Nov 23 23:30 UTC |          |
	|         | -p download-only-717158           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.29.0-rc.0 |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	|         | --driver=docker                   |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/11/27 23:30:15
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.21.4 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1127 23:30:15.775429 1460806 out.go:296] Setting OutFile to fd 1 ...
	I1127 23:30:15.777152 1460806 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1127 23:30:15.777202 1460806 out.go:309] Setting ErrFile to fd 2...
	I1127 23:30:15.777222 1460806 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1127 23:30:15.777524 1460806 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17206-1455288/.minikube/bin
	W1127 23:30:15.777724 1460806 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17206-1455288/.minikube/config/config.json: open /home/jenkins/minikube-integration/17206-1455288/.minikube/config/config.json: no such file or directory
	I1127 23:30:15.778063 1460806 out.go:303] Setting JSON to true
	I1127 23:30:15.779128 1460806 start.go:128] hostinfo: {"hostname":"ip-172-31-29-130","uptime":22365,"bootTime":1701105451,"procs":325,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1050-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1127 23:30:15.779235 1460806 start.go:138] virtualization:  
	I1127 23:30:15.781771 1460806 out.go:97] [download-only-717158] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I1127 23:30:15.782166 1460806 notify.go:220] Checking for updates...
	I1127 23:30:15.785798 1460806 out.go:169] MINIKUBE_LOCATION=17206
	I1127 23:30:15.788337 1460806 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1127 23:30:15.790087 1460806 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17206-1455288/kubeconfig
	I1127 23:30:15.791797 1460806 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17206-1455288/.minikube
	I1127 23:30:15.793420 1460806 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W1127 23:30:15.796740 1460806 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1127 23:30:15.797337 1460806 config.go:182] Loaded profile config "download-only-717158": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	W1127 23:30:15.797405 1460806 start.go:810] api.Load failed for download-only-717158: filestore "download-only-717158": Docker machine "download-only-717158" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1127 23:30:15.797507 1460806 driver.go:378] Setting default libvirt URI to qemu:///system
	W1127 23:30:15.797542 1460806 start.go:810] api.Load failed for download-only-717158: filestore "download-only-717158": Docker machine "download-only-717158" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1127 23:30:15.820998 1460806 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1127 23:30:15.821096 1460806 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1127 23:30:15.908179 1460806 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:38 SystemTime:2023-11-27 23:30:15.898497648 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1050-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1127 23:30:15.908281 1460806 docker.go:295] overlay module found
	I1127 23:30:15.910432 1460806 out.go:97] Using the docker driver based on existing profile
	I1127 23:30:15.910463 1460806 start.go:298] selected driver: docker
	I1127 23:30:15.910471 1460806 start.go:902] validating driver "docker" against &{Name:download-only-717158 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:download-only-717158 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1127 23:30:15.910647 1460806 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1127 23:30:15.977326 1460806 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:38 SystemTime:2023-11-27 23:30:15.967973795 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1050-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1127 23:30:15.977786 1460806 cni.go:84] Creating CNI manager for ""
	I1127 23:30:15.977808 1460806 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1127 23:30:15.977822 1460806 start_flags.go:323] config:
	{Name:download-only-717158 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.0 ClusterName:download-only-717158 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1127 23:30:15.980008 1460806 out.go:97] Starting control plane node download-only-717158 in cluster download-only-717158
	I1127 23:30:15.980032 1460806 cache.go:121] Beginning downloading kic base image for docker with crio
	I1127 23:30:15.982139 1460806 out.go:97] Pulling base image ...
	I1127 23:30:15.982163 1460806 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.0 and runtime crio
	I1127 23:30:15.982355 1460806 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 in local docker daemon
	I1127 23:30:15.999646 1460806 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 to local cache
	I1127 23:30:15.999802 1460806 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 in local cache directory
	I1127 23:30:15.999828 1460806 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 in local cache directory, skipping pull
	I1127 23:30:15.999834 1460806 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 exists in cache, skipping pull
	I1127 23:30:15.999842 1460806 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 as a tarball
	I1127 23:30:16.055827 1460806 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.0/preloaded-images-k8s-v18-v1.29.0-rc.0-cri-o-overlay-arm64.tar.lz4
	I1127 23:30:16.055858 1460806 cache.go:56] Caching tarball of preloaded images
	I1127 23:30:16.056023 1460806 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.0 and runtime crio
	I1127 23:30:16.058068 1460806 out.go:97] Downloading Kubernetes v1.29.0-rc.0 preload ...
	I1127 23:30:16.058115 1460806 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.29.0-rc.0-cri-o-overlay-arm64.tar.lz4 ...
	I1127 23:30:16.174100 1460806 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.0/preloaded-images-k8s-v18-v1.29.0-rc.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:2fbedfd2c2a9c642428164f4d73fb9c1 -> /home/jenkins/minikube-integration/17206-1455288/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.0-cri-o-overlay-arm64.tar.lz4
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-717158"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.29.0-rc.0/LogsDuration (0.20s)

                                                
                                    
x
+
TestDownloadOnly/DeleteAll (0.4s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:190: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.40s)

                                                
                                    
x
+
TestDownloadOnly/DeleteAlwaysSucceeds (0.26s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:202: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-717158
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.26s)

                                                
                                    
x
+
TestBinaryMirror (0.68s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:307: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-434652 --alsologtostderr --binary-mirror http://127.0.0.1:33683 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-434652" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-434652
--- PASS: TestBinaryMirror (0.68s)
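
TestBinaryMirror points minikube at a plain HTTP endpoint (127.0.0.1:33683 here) from which it downloads Kubernetes binaries instead of the default upstream. Below is a minimal sketch of such a mirror using only the Go standard library; the ./mirror directory is a hypothetical path, since the test harness stages its own tree, and this only illustrates that --binary-mirror expects ordinary static file serving.

	package main

	import (
		"log"
		"net/http"
	)

	func main() {
		// Serve a local directory of cached binaries over plain HTTP.
		// "./mirror" is a hypothetical path; the real harness stages its own tree.
		fs := http.FileServer(http.Dir("./mirror"))
		log.Println("binary mirror listening on 127.0.0.1:33683")
		log.Fatal(http.ListenAndServe("127.0.0.1:33683", fs))
	}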

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.09s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:927: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-606180
addons_test.go:927: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-606180: exit status 85 (94.005966ms)

                                                
                                                
-- stdout --
	* Profile "addons-606180" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-606180"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.09s)

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.1s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:938: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-606180
addons_test.go:938: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-606180: exit status 85 (98.015145ms)

                                                
                                                
-- stdout --
	* Profile "addons-606180" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-606180"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.10s)
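
Both PreSetup tests above assert a specific non-zero exit code (85) from the minikube CLI rather than mere failure. A short Go sketch of how a caller can recover that code with os/exec; the command line is copied from the test, and the rest is ordinary standard-library error handling.

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-arm64", "addons", "enable", "dashboard", "-p", "addons-606180")
		err := cmd.Run()
		var ee *exec.ExitError
		if errors.As(err, &ee) {
			// For a profile that does not exist, the tests above expect 85.
			fmt.Println("exit code:", ee.ExitCode())
			return
		}
		fmt.Println("command succeeded or failed to start:", err)
	}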

                                                
                                    
x
+
TestAddons/Setup (169.64s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-linux-arm64 start -p addons-606180 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns
addons_test.go:109: (dbg) Done: out/minikube-linux-arm64 start -p addons-606180 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns: (2m49.644513556s)
--- PASS: TestAddons/Setup (169.64s)

                                                
                                    
x
+
TestAddons/parallel/Registry (17.97s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:329: registry stabilized in 42.755659ms
addons_test.go:331: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-pwpqm" [374cdeaf-b970-4ff9-bb78-cb2fc4f63693] Running
addons_test.go:331: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.019587739s
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-hdjm6" [018a3f9c-a04e-43dd-b703-e873cea20fb0] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.015261846s
addons_test.go:339: (dbg) Run:  kubectl --context addons-606180 delete po -l run=registry-test --now
addons_test.go:344: (dbg) Run:  kubectl --context addons-606180 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:344: (dbg) Done: kubectl --context addons-606180 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (6.683473133s)
addons_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p addons-606180 ip
2023/11/27 23:33:37 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:387: (dbg) Run:  out/minikube-linux-arm64 -p addons-606180 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (17.97s)
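
The DEBUG line above probes the registry addon directly at http://192.168.49.2:5000, the node IP reported by `minikube ip`. A small Go sketch of the same reachability check; note that we GET /v2/, the Docker Registry HTTP API ping path, as an assumed health probe, whereas the test itself only fetches the root.

	package main

	import (
		"fmt"
		"net/http"
		"os"
		"time"
	)

	func main() {
		client := &http.Client{Timeout: 5 * time.Second}
		// IP taken from the `minikube ip` step in the test; /v2/ is the
		// registry API ping path (an assumed, though conventional, probe).
		resp, err := client.Get("http://192.168.49.2:5000/v2/")
		if err != nil {
			fmt.Fprintln(os.Stderr, "registry unreachable:", err)
			os.Exit(1)
		}
		defer resp.Body.Close()
		fmt.Println("registry responded:", resp.Status)
	}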

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (10.8s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:837: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-dm99m" [faa4c2b7-97b8-4052-bc48-aa66297404da] Running
addons_test.go:837: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.015135563s
addons_test.go:840: (dbg) Run:  out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-606180
addons_test.go:840: (dbg) Done: out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-606180: (5.785035404s)
--- PASS: TestAddons/parallel/InspektorGadget (10.80s)

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (5.86s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:406: metrics-server stabilized in 4.508845ms
addons_test.go:408: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7c66d45ddc-958nm" [1fbde937-4bc9-42bd-9d82-4139b59c9660] Running
addons_test.go:408: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.013076707s
addons_test.go:414: (dbg) Run:  kubectl --context addons-606180 top pods -n kube-system
addons_test.go:431: (dbg) Run:  out/minikube-linux-arm64 -p addons-606180 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.86s)

                                                
                                    
x
+
TestAddons/parallel/CSI (42.21s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:560: csi-hostpath-driver pods stabilized in 43.082293ms
addons_test.go:563: (dbg) Run:  kubectl --context addons-606180 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:568: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-606180 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-606180 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-606180 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-606180 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-606180 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-606180 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-606180 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-606180 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-606180 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:573: (dbg) Run:  kubectl --context addons-606180 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:578: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [8aacc087-cf9e-43f4-8c9b-eb416cb0b2e0] Pending
helpers_test.go:344: "task-pv-pod" [8aacc087-cf9e-43f4-8c9b-eb416cb0b2e0] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [8aacc087-cf9e-43f4-8c9b-eb416cb0b2e0] Running
addons_test.go:578: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 12.018380853s
addons_test.go:583: (dbg) Run:  kubectl --context addons-606180 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:588: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-606180 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-606180 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-606180 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:593: (dbg) Run:  kubectl --context addons-606180 delete pod task-pv-pod
addons_test.go:599: (dbg) Run:  kubectl --context addons-606180 delete pvc hpvc
addons_test.go:605: (dbg) Run:  kubectl --context addons-606180 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:610: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-606180 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-606180 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-606180 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:615: (dbg) Run:  kubectl --context addons-606180 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:620: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [dfa15d98-068f-4763-87e2-a6c091d02f6b] Pending
helpers_test.go:344: "task-pv-pod-restore" [dfa15d98-068f-4763-87e2-a6c091d02f6b] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [dfa15d98-068f-4763-87e2-a6c091d02f6b] Running
addons_test.go:620: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.018231573s
addons_test.go:625: (dbg) Run:  kubectl --context addons-606180 delete pod task-pv-pod-restore
addons_test.go:629: (dbg) Run:  kubectl --context addons-606180 delete pvc hpvc-restore
addons_test.go:633: (dbg) Run:  kubectl --context addons-606180 delete volumesnapshot new-snapshot-demo
addons_test.go:637: (dbg) Run:  out/minikube-linux-arm64 -p addons-606180 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:637: (dbg) Done: out/minikube-linux-arm64 -p addons-606180 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.818719979s)
addons_test.go:641: (dbg) Run:  out/minikube-linux-arm64 -p addons-606180 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (42.21s)
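
The helpers above poll `kubectl get pvc ... -o jsonpath={.status.phase}` until the claim reports Bound. A sketch of that loop in Go; the 2-second interval and the function name waitForPVCBound are our choices, not values taken from helpers_test.go.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// waitForPVCBound repeatedly reads the claim's .status.phase, as the
	// helpers above do, until it reports Bound or the deadline passes.
	func waitForPVCBound(kubeContext, ns, name string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			out, err := exec.Command("kubectl", "--context", kubeContext, "get", "pvc", name,
				"-o", "jsonpath={.status.phase}", "-n", ns).Output()
			if err == nil && strings.TrimSpace(string(out)) == "Bound" {
				return nil
			}
			time.Sleep(2 * time.Second) // interval is an assumption
		}
		return fmt.Errorf("pvc %s/%s not Bound within %s", ns, name, timeout)
	}

	func main() {
		if err := waitForPVCBound("addons-606180", "default", "hpvc", 6*time.Minute); err != nil {
			fmt.Println(err)
		}
	}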

                                                
                                    
x
+
TestAddons/parallel/Headlamp (11.58s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:823: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-606180 --alsologtostderr -v=1
addons_test.go:823: (dbg) Done: out/minikube-linux-arm64 addons enable headlamp -p addons-606180 --alsologtostderr -v=1: (1.551147237s)
addons_test.go:828: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-777fd4b855-98cdp" [b623da3e-43ae-40f4-8b43-2a6361972c66] Pending
helpers_test.go:344: "headlamp-777fd4b855-98cdp" [b623da3e-43ae-40f4-8b43-2a6361972c66] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-777fd4b855-98cdp" [b623da3e-43ae-40f4-8b43-2a6361972c66] Running
addons_test.go:828: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 10.030726492s
--- PASS: TestAddons/parallel/Headlamp (11.58s)

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (5.66s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:856: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5649c69bf6-kw9qn" [ec31f28a-0c7c-421a-ae73-de55518b9640] Running
addons_test.go:856: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.00997288s
addons_test.go:859: (dbg) Run:  out/minikube-linux-arm64 addons disable cloud-spanner -p addons-606180
--- PASS: TestAddons/parallel/CloudSpanner (5.66s)

                                                
                                    
x
+
TestAddons/parallel/LocalPath (53.15s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:872: (dbg) Run:  kubectl --context addons-606180 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:878: (dbg) Run:  kubectl --context addons-606180 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:882: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-606180 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-606180 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-606180 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-606180 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-606180 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-606180 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:885: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [2a32e855-82fc-4ef4-a3ad-e193dda9f227] Pending
helpers_test.go:344: "test-local-path" [2a32e855-82fc-4ef4-a3ad-e193dda9f227] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [2a32e855-82fc-4ef4-a3ad-e193dda9f227] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [2a32e855-82fc-4ef4-a3ad-e193dda9f227] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:885: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.01126256s
addons_test.go:890: (dbg) Run:  kubectl --context addons-606180 get pvc test-pvc -o=json
addons_test.go:899: (dbg) Run:  out/minikube-linux-arm64 -p addons-606180 ssh "cat /opt/local-path-provisioner/pvc-92fea970-b462-46c9-a754-48da8038f828_default_test-pvc/file1"
addons_test.go:911: (dbg) Run:  kubectl --context addons-606180 delete pod test-local-path
addons_test.go:915: (dbg) Run:  kubectl --context addons-606180 delete pvc test-pvc
addons_test.go:919: (dbg) Run:  out/minikube-linux-arm64 -p addons-606180 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:919: (dbg) Done: out/minikube-linux-arm64 -p addons-606180 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.553470623s)
--- PASS: TestAddons/parallel/LocalPath (53.15s)

TestAddons/parallel/NvidiaDevicePlugin (5.77s)
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:951: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-g52cs" [36e3cc61-cb27-466b-b53f-7c52daf2d850] Running
addons_test.go:951: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.020943512s
addons_test.go:954: (dbg) Run:  out/minikube-linux-arm64 addons disable nvidia-device-plugin -p addons-606180
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.77s)

TestAddons/serial/GCPAuth/Namespaces (0.16s)
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:649: (dbg) Run:  kubectl --context addons-606180 create ns new-namespace
addons_test.go:663: (dbg) Run:  kubectl --context addons-606180 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.16s)
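
What this exercises: the gcp-auth addon is expected to copy its credential secret into namespaces created after it is enabled, so the probe is just the two commands above, reusable against any fresh namespace:

	kubectl --context addons-606180 create ns new-namespace
	kubectl --context addons-606180 get secret gcp-auth -n new-namespace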

TestAddons/StoppedEnableDisable (12.35s)
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:171: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-606180
addons_test.go:171: (dbg) Done: out/minikube-linux-arm64 stop -p addons-606180: (12.027910879s)
addons_test.go:175: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-606180
addons_test.go:179: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-606180
addons_test.go:184: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-606180
--- PASS: TestAddons/StoppedEnableDisable (12.35s)

TestCertOptions (41.82s)
=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-851884 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
E1128 00:22:26.164177 1460652 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/ingress-addon-legacy-684553/client.crt: no such file or directory
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-851884 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (38.518323275s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-851884 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-851884 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-851884 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-851884" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-851884
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-851884: (2.283370153s)
--- PASS: TestCertOptions (41.82s)
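
To spot-check by hand that the extra SANs (192.168.15.15, www.google.com) and the custom apiserver port 8555 landed in the generated credentials, the same probes the test runs can be piped through grep (the grep targets are illustrative, not part of the test):

	out/minikube-linux-arm64 -p cert-options-851884 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" | grep -E '192.168.15.15|www.google.com'
	out/minikube-linux-arm64 ssh -p cert-options-851884 -- "sudo cat /etc/kubernetes/admin.conf" | grep 8555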

TestCertExpiration (235.59s)
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-778355 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-778355 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio: (33.30234963s)
E1128 00:19:23.120139 1460652 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/ingress-addon-legacy-684553/client.crt: no such file or directory
E1128 00:19:36.207063 1460652 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/functional-428453/client.crt: no such file or directory
E1128 00:21:33.164058 1460652 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/functional-428453/client.crt: no such file or directory
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-778355 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-778355 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (19.752612722s)
helpers_test.go:175: Cleaning up "cert-expiration-778355" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-778355
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-778355: (2.528660105s)
--- PASS: TestCertExpiration (235.59s)
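
The gap of several minutes between the two Done lines is presumably the point: the first start issues certificates with a 3m lifetime, and the second start, run after they have lapsed, must regenerate them transparently with the 8760h expiry. A by-hand sketch (the sleep is illustrative):

	out/minikube-linux-arm64 start -p cert-expiration-778355 --memory=2048 --cert-expiration=3m --driver=docker --container-runtime=crio
	sleep 180
	out/minikube-linux-arm64 start -p cert-expiration-778355 --memory=2048 --cert-expiration=8760h --driver=docker --container-runtime=crio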

TestForceSystemdFlag (36.39s)
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-190879 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-190879 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (33.96445353s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-190879 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-190879" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-190879
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-190879: (2.116789837s)
--- PASS: TestForceSystemdFlag (36.39s)
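
--force-systemd should surface in CRI-O's drop-in config as the systemd cgroup manager; a quick manual check against the same file the test reads (the grep key is an assumption, not quoted from the test):

	out/minikube-linux-arm64 -p force-systemd-flag-190879 ssh "cat /etc/crio/crio.conf.d/02-crio.conf" | grep cgroup_manager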

TestForceSystemdEnv (39.73s)
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-756179 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E1128 00:18:20.411365 1460652 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/addons-606180/client.crt: no such file or directory
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-756179 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (37.265086611s)
helpers_test.go:175: Cleaning up "force-systemd-env-756179" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-756179
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-756179: (2.467611335s)
--- PASS: TestForceSystemdEnv (39.73s)

TestErrorSpam/setup (32.04s)
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-612516 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-612516 --driver=docker  --container-runtime=crio
E1127 23:38:20.412600 1460652 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/addons-606180/client.crt: no such file or directory
E1127 23:38:20.419376 1460652 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/addons-606180/client.crt: no such file or directory
E1127 23:38:20.429590 1460652 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/addons-606180/client.crt: no such file or directory
E1127 23:38:20.449896 1460652 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/addons-606180/client.crt: no such file or directory
E1127 23:38:20.490166 1460652 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/addons-606180/client.crt: no such file or directory
E1127 23:38:20.570429 1460652 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/addons-606180/client.crt: no such file or directory
E1127 23:38:20.730768 1460652 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/addons-606180/client.crt: no such file or directory
E1127 23:38:21.051336 1460652 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/addons-606180/client.crt: no such file or directory
E1127 23:38:21.692284 1460652 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/addons-606180/client.crt: no such file or directory
E1127 23:38:22.972506 1460652 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/addons-606180/client.crt: no such file or directory
E1127 23:38:25.532710 1460652 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/addons-606180/client.crt: no such file or directory
E1127 23:38:30.653821 1460652 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/addons-606180/client.crt: no such file or directory
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-612516 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-612516 --driver=docker  --container-runtime=crio: (32.040512915s)
--- PASS: TestErrorSpam/setup (32.04s)

TestErrorSpam/start (0.9s)
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-612516 --log_dir /tmp/nospam-612516 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-612516 --log_dir /tmp/nospam-612516 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-612516 --log_dir /tmp/nospam-612516 start --dry-run
--- PASS: TestErrorSpam/start (0.90s)

TestErrorSpam/status (1.16s)
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-612516 --log_dir /tmp/nospam-612516 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-612516 --log_dir /tmp/nospam-612516 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-612516 --log_dir /tmp/nospam-612516 status
--- PASS: TestErrorSpam/status (1.16s)

TestErrorSpam/pause (1.87s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-612516 --log_dir /tmp/nospam-612516 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-612516 --log_dir /tmp/nospam-612516 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-612516 --log_dir /tmp/nospam-612516 pause
--- PASS: TestErrorSpam/pause (1.87s)

TestErrorSpam/unpause (2.09s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-612516 --log_dir /tmp/nospam-612516 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-612516 --log_dir /tmp/nospam-612516 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-612516 --log_dir /tmp/nospam-612516 unpause
--- PASS: TestErrorSpam/unpause (2.09s)

TestErrorSpam/stop (1.48s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-612516 --log_dir /tmp/nospam-612516 stop
E1127 23:38:40.893993 1460652 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/addons-606180/client.crt: no such file or directory
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-612516 --log_dir /tmp/nospam-612516 stop: (1.243412156s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-612516 --log_dir /tmp/nospam-612516 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-612516 --log_dir /tmp/nospam-612516 stop
--- PASS: TestErrorSpam/stop (1.48s)

TestFunctional/serial/CopySyncFile (0s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/17206-1455288/.minikube/files/etc/test/nested/copy/1460652/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (75.3s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-arm64 start -p functional-428453 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
E1127 23:39:01.375075 1460652 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/addons-606180/client.crt: no such file or directory
E1127 23:39:42.336285 1460652 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/addons-606180/client.crt: no such file or directory
functional_test.go:2230: (dbg) Done: out/minikube-linux-arm64 start -p functional-428453 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (1m15.298980494s)
--- PASS: TestFunctional/serial/StartWithProxy (75.30s)

TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (34.86s)
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-arm64 start -p functional-428453 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-linux-arm64 start -p functional-428453 --alsologtostderr -v=8: (34.850464172s)
functional_test.go:659: soft start took 34.857555765s for "functional-428453" cluster.
--- PASS: TestFunctional/serial/SoftStart (34.86s)

TestFunctional/serial/KubeContext (0.06s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

TestFunctional/serial/KubectlGetPods (0.1s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-428453 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.10s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.95s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-428453 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-428453 cache add registry.k8s.io/pause:3.1: (1.301663441s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-428453 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-428453 cache add registry.k8s.io/pause:3.3: (1.345827059s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-428453 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-428453 cache add registry.k8s.io/pause:latest: (1.303222489s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.95s)

TestFunctional/serial/CacheCmd/cache/add_local (1.19s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-428453 /tmp/TestFunctionalserialCacheCmdcacheadd_local933618143/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-arm64 -p functional-428453 cache add minikube-local-cache-test:functional-428453
functional_test.go:1090: (dbg) Run:  out/minikube-linux-arm64 -p functional-428453 cache delete minikube-local-cache-test:functional-428453
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-428453
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.19s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.09s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.09s)

TestFunctional/serial/CacheCmd/cache/list (0.08s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.08s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.33s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-arm64 -p functional-428453 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.33s)

TestFunctional/serial/CacheCmd/cache/cache_reload (2.25s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-arm64 -p functional-428453 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-arm64 -p functional-428453 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-428453 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (357.445829ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-arm64 -p functional-428453 cache reload
functional_test.go:1154: (dbg) Done: out/minikube-linux-arm64 -p functional-428453 cache reload: (1.133814654s)
functional_test.go:1159: (dbg) Run:  out/minikube-linux-arm64 -p functional-428453 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.25s)
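
The reload sequence is easy to replay by hand: delete the image inside the node, confirm crictl no longer sees it (exit 1, as above), then repopulate it from minikube's on-host cache and check again:

	out/minikube-linux-arm64 -p functional-428453 ssh sudo crictl rmi registry.k8s.io/pause:latest
	out/minikube-linux-arm64 -p functional-428453 ssh sudo crictl inspecti registry.k8s.io/pause:latest
	out/minikube-linux-arm64 -p functional-428453 cache reload
	out/minikube-linux-arm64 -p functional-428453 ssh sudo crictl inspecti registry.k8s.io/pause:latest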

TestFunctional/serial/CacheCmd/cache/delete (0.15s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.15s)

TestFunctional/serial/MinikubeKubectlCmd (0.17s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-arm64 -p functional-428453 kubectl -- --context functional-428453 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.17s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.17s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-428453 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.17s)

TestFunctional/serial/ExtraConfig (37.17s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-arm64 start -p functional-428453 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1127 23:41:04.257999 1460652 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/addons-606180/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-linux-arm64 start -p functional-428453 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (37.166233448s)
functional_test.go:757: restart took 37.166328126s for "functional-428453" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (37.17s)
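
--extra-config restarts the cluster with the flag threaded into the named component; to confirm by hand that the admission plugin actually reached the apiserver, grep the control-plane pod specs (the verification step is a sketch, not part of the test):

	out/minikube-linux-arm64 start -p functional-428453 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
	kubectl --context functional-428453 -n kube-system get pods -o yaml | grep enable-admission-plugins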

TestFunctional/serial/ComponentHealth (0.1s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-428453 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.10s)

TestFunctional/serial/LogsCmd (1.87s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-arm64 -p functional-428453 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-arm64 -p functional-428453 logs: (1.868118448s)
--- PASS: TestFunctional/serial/LogsCmd (1.87s)

TestFunctional/serial/LogsFileCmd (1.86s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-arm64 -p functional-428453 logs --file /tmp/TestFunctionalserialLogsFileCmd360667920/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-arm64 -p functional-428453 logs --file /tmp/TestFunctionalserialLogsFileCmd360667920/001/logs.txt: (1.858975166s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.86s)

TestFunctional/serial/InvalidService (4.43s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-428453 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-428453
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-428453: exit status 115 (720.893762ms)

-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:30662 |
	|-----------|-------------|-------------|---------------------------|
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-428453 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.43s)
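
Note that the command still prints the NodePort table but exits 115 (SVC_UNREACHABLE) because no running pod backs the service; the round trip by hand:

	kubectl --context functional-428453 apply -f testdata/invalidsvc.yaml
	out/minikube-linux-arm64 service invalid-svc -p functional-428453
	kubectl --context functional-428453 delete -f testdata/invalidsvc.yaml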

TestFunctional/parallel/ConfigCmd (0.66s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-428453 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-428453 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-428453 config get cpus: exit status 14 (127.36944ms)

** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-428453 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-428453 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-428453 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-428453 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-428453 config get cpus: exit status 14 (122.115103ms)

** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.66s)
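
The behavior being pinned down: "config get" on an unset key exits 14 instead of printing an empty value, so the set/get/unset round trip looks like this by hand (expected outcomes inferred from the output above):

	out/minikube-linux-arm64 -p functional-428453 config get cpus
	out/minikube-linux-arm64 -p functional-428453 config set cpus 2
	out/minikube-linux-arm64 -p functional-428453 config get cpus
	out/minikube-linux-arm64 -p functional-428453 config unset cpus
	out/minikube-linux-arm64 -p functional-428453 config get cpus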

TestFunctional/parallel/DashboardCmd (14.07s)
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-428453 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-428453 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 1486518: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (14.07s)

TestFunctional/parallel/DryRun (0.68s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-arm64 start -p functional-428453 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-428453 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (249.535744ms)

-- stdout --
	* [functional-428453] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17206
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17206-1455288/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17206-1455288/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
-- /stdout --
** stderr ** 
	I1127 23:42:05.797588 1486013 out.go:296] Setting OutFile to fd 1 ...
	I1127 23:42:05.797834 1486013 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1127 23:42:05.797893 1486013 out.go:309] Setting ErrFile to fd 2...
	I1127 23:42:05.797913 1486013 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1127 23:42:05.798238 1486013 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17206-1455288/.minikube/bin
	I1127 23:42:05.798667 1486013 out.go:303] Setting JSON to false
	I1127 23:42:05.802985 1486013 start.go:128] hostinfo: {"hostname":"ip-172-31-29-130","uptime":23075,"bootTime":1701105451,"procs":307,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1050-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1127 23:42:05.803181 1486013 start.go:138] virtualization:  
	I1127 23:42:05.805621 1486013 out.go:177] * [functional-428453] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I1127 23:42:05.807760 1486013 out.go:177]   - MINIKUBE_LOCATION=17206
	I1127 23:42:05.809763 1486013 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1127 23:42:05.807910 1486013 notify.go:220] Checking for updates...
	I1127 23:42:05.813163 1486013 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17206-1455288/kubeconfig
	I1127 23:42:05.815353 1486013 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17206-1455288/.minikube
	I1127 23:42:05.817085 1486013 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1127 23:42:05.818817 1486013 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1127 23:42:05.821276 1486013 config.go:182] Loaded profile config "functional-428453": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1127 23:42:05.821820 1486013 driver.go:378] Setting default libvirt URI to qemu:///system
	I1127 23:42:05.867128 1486013 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1127 23:42:05.867243 1486013 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1127 23:42:05.961259 1486013 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:35 OomKillDisable:true NGoroutines:46 SystemTime:2023-11-27 23:42:05.944570881 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1050-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1127 23:42:05.961412 1486013 docker.go:295] overlay module found
	I1127 23:42:05.970491 1486013 out.go:177] * Using the docker driver based on existing profile
	I1127 23:42:05.972192 1486013 start.go:298] selected driver: docker
	I1127 23:42:05.972216 1486013 start.go:902] validating driver "docker" against &{Name:functional-428453 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-428453 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1127 23:42:05.972332 1486013 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1127 23:42:05.974681 1486013 out.go:177] 
	W1127 23:42:05.976364 1486013 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1127 23:42:05.977957 1486013 out.go:177] 
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-arm64 start -p functional-428453 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.68s)
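
--dry-run still runs the full validation pass, so an undersized --memory fails up front with exit 23 (RSRC_INSUFFICIENT_REQ_MEMORY) and nothing is created; dropping the memory override lets validation pass:

	out/minikube-linux-arm64 start -p functional-428453 --dry-run --memory 250MB --alsologtostderr --driver=docker --container-runtime=crio
	out/minikube-linux-arm64 start -p functional-428453 --dry-run --alsologtostderr -v=1 --driver=docker --container-runtime=crio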

TestFunctional/parallel/InternationalLanguage (0.22s)
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-arm64 start -p functional-428453 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-428453 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (220.779216ms)

-- stdout --
	* [functional-428453] minikube v1.32.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17206
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17206-1455288/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17206-1455288/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
-- /stdout --
** stderr ** 
	I1127 23:42:05.589812 1485974 out.go:296] Setting OutFile to fd 1 ...
	I1127 23:42:05.590083 1485974 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1127 23:42:05.590111 1485974 out.go:309] Setting ErrFile to fd 2...
	I1127 23:42:05.590131 1485974 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1127 23:42:05.592238 1485974 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17206-1455288/.minikube/bin
	I1127 23:42:05.592711 1485974 out.go:303] Setting JSON to false
	I1127 23:42:05.593823 1485974 start.go:128] hostinfo: {"hostname":"ip-172-31-29-130","uptime":23075,"bootTime":1701105451,"procs":306,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1050-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1127 23:42:05.593950 1485974 start.go:138] virtualization:  
	I1127 23:42:05.596703 1485974 out.go:177] * [functional-428453] minikube v1.32.0 sur Ubuntu 20.04 (arm64)
	I1127 23:42:05.599043 1485974 notify.go:220] Checking for updates...
	I1127 23:42:05.601650 1485974 out.go:177]   - MINIKUBE_LOCATION=17206
	I1127 23:42:05.603818 1485974 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1127 23:42:05.605390 1485974 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17206-1455288/kubeconfig
	I1127 23:42:05.607492 1485974 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17206-1455288/.minikube
	I1127 23:42:05.609240 1485974 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1127 23:42:05.611083 1485974 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1127 23:42:05.613498 1485974 config.go:182] Loaded profile config "functional-428453": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1127 23:42:05.614080 1485974 driver.go:378] Setting default libvirt URI to qemu:///system
	I1127 23:42:05.639311 1485974 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1127 23:42:05.639429 1485974 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1127 23:42:05.717559 1485974 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:35 OomKillDisable:true NGoroutines:46 SystemTime:2023-11-27 23:42:05.707485158 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1050-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1127 23:42:05.717657 1485974 docker.go:295] overlay module found
	I1127 23:42:05.719944 1485974 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I1127 23:42:05.722358 1485974 start.go:298] selected driver: docker
	I1127 23:42:05.722376 1485974 start.go:902] validating driver "docker" against &{Name:functional-428453 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-428453 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1127 23:42:05.722468 1485974 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1127 23:42:05.724557 1485974 out.go:177] 
	W1127 23:42:05.726707 1485974 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1127 23:42:05.728567 1485974 out.go:177] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.22s)

TestFunctional/parallel/StatusCmd (1.19s)
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-arm64 -p functional-428453 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-arm64 -p functional-428453 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-arm64 -p functional-428453 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.19s)

TestFunctional/parallel/ServiceCmdConnect (9.71s)
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1626: (dbg) Run:  kubectl --context functional-428453 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-428453 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-7799dfb7c6-6l4s5" [2f066a97-ab2f-45e7-b24b-aff84e9001b6] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-7799dfb7c6-6l4s5" [2f066a97-ab2f-45e7-b24b-aff84e9001b6] Running
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 9.02396897s
functional_test.go:1648: (dbg) Run:  out/minikube-linux-arm64 -p functional-428453 service hello-node-connect --url
functional_test.go:1654: found endpoint for hello-node-connect: http://192.168.49.2:30491
functional_test.go:1674: http://192.168.49.2:30491: success! body:

Hostname: hello-node-connect-7799dfb7c6-6l4s5

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:30491
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (9.71s)
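
The flow this test automates can be reproduced by hand; a sketch assuming a profile named demo (the image is the one the suite uses):

    kubectl create deployment hello --image=registry.k8s.io/echoserver-arm:1.8
    kubectl expose deployment hello --type=NodePort --port=8080
    minikube -p demo service hello --url        # prints http://<node-ip>:<node-port>
    curl "$(minikube -p demo service hello --url)"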

TestFunctional/parallel/AddonsCmd (0.18s)
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-linux-arm64 -p functional-428453 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-linux-arm64 -p functional-428453 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.18s)

TestFunctional/parallel/PersistentVolumeClaim (25.08s)
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [a708da06-ddd5-4015-8383-172d1ddf7932] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.042030695s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-428453 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-428453 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-428453 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-428453 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [951de07d-e98c-4473-9bf4-65fedd0d9ac2] Pending
helpers_test.go:344: "sp-pod" [951de07d-e98c-4473-9bf4-65fedd0d9ac2] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [951de07d-e98c-4473-9bf4-65fedd0d9ac2] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.030978984s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-428453 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-428453 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-428453 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [8bd825dc-3d92-4bd9-94d2-c5a8359c6ec2] Pending
helpers_test.go:344: "sp-pod" [8bd825dc-3d92-4bd9-94d2-c5a8359c6ec2] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [8bd825dc-3d92-4bd9-94d2-c5a8359c6ec2] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.019716785s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-428453 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (25.08s)
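
The claim itself comes from testdata/storage-provisioner/pvc.yaml, which the log does not reproduce; a minimal claim of the same shape (only the name myclaim comes from the log, the storage size here is an assumption) would be:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: myclaim
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 500Mi
    EOF

The touch/delete/ls sequence above then proves the volume outlives any single pod: the file written through the first sp-pod is still visible from its replacement.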

TestFunctional/parallel/SSHCmd (0.85s)
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-linux-arm64 -p functional-428453 ssh "echo hello"
functional_test.go:1741: (dbg) Run:  out/minikube-linux-arm64 -p functional-428453 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.85s)

TestFunctional/parallel/CpCmd (1.57s)
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-428453 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-428453 ssh -n functional-428453 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-428453 cp functional-428453:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3079440841/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-428453 ssh -n functional-428453 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.57s)
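
minikube cp copies in both directions; paths inside a node are addressed as <node>:<path>, as the second command above shows. A sketch with a hypothetical profile demo:

    minikube -p demo cp ./local.txt /home/docker/remote.txt       # host -> node
    minikube -p demo cp demo:/home/docker/remote.txt ./back.txt   # node -> host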

TestFunctional/parallel/FileSync (0.44s)
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/1460652/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-arm64 -p functional-428453 ssh "sudo cat /etc/test/nested/copy/1460652/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.44s)
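
File sync works by copying anything placed under $MINIKUBE_HOME/files (by default ~/.minikube/files) into the node at the same path; the nested /etc/test/... file above was staged that way by the suite. A rough sketch of the user-facing feature (all paths hypothetical):

    mkdir -p ~/.minikube/files/etc/demo
    echo "synced" > ~/.minikube/files/etc/demo/marker
    minikube start -p demo
    minikube -p demo ssh "cat /etc/demo/marker"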

TestFunctional/parallel/CertSync (2.53s)
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/1460652.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-428453 ssh "sudo cat /etc/ssl/certs/1460652.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/1460652.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-428453 ssh "sudo cat /usr/share/ca-certificates/1460652.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-428453 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/14606522.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-428453 ssh "sudo cat /etc/ssl/certs/14606522.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/14606522.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-428453 ssh "sudo cat /usr/share/ca-certificates/14606522.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-428453 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.53s)
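
The hashed filenames checked above (51391683.0, 3ec20f2e.0) follow OpenSSL's subject-hash convention for CA directories, so the same certificate is reachable both by name and by hash. The hash for any certificate can be recomputed with:

    openssl x509 -noout -subject_hash -in 1460652.pem    # prints 51391683 for this cert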

TestFunctional/parallel/NodeLabels (0.12s)
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-428453 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.12s)
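
The go-template above iterates the label map of the first node; a slightly friendlier variant that also prints the values, one label per line, would be:

    kubectl get nodes -o go-template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}}={{$v}}{{"\n"}}{{end}}'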

TestFunctional/parallel/NonActiveRuntimeDisabled (0.89s)
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-arm64 -p functional-428453 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-428453 ssh "sudo systemctl is-active docker": exit status 1 (432.317017ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-arm64 -p functional-428453 ssh "sudo systemctl is-active containerd"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-428453 ssh "sudo systemctl is-active containerd": exit status 1 (457.424811ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.89s)
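
The non-zero exits are the passing condition here: systemctl is-active only exits 0 when the unit is active, and the status 3 seen in both stderr blocks is its code for an inactive unit. Since this job runs the crio runtime, docker and containerd are expected to be inactive; a sketch with a hypothetical profile demo:

    minikube -p demo ssh "sudo systemctl is-active crio"      # active, exit 0
    minikube -p demo ssh "sudo systemctl is-active docker"    # inactive, exit 3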

TestFunctional/parallel/License (0.34s)
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.34s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.63s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-428453 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-428453 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-428453 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 1484026: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-428453 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.63s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-428453 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.49s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-428453 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [7c067f1f-0a39-4a39-9d0b-c84a1df7a461] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [7c067f1f-0a39-4a39-9d0b-c84a1df7a461] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 10.015874627s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.49s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.1s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-428453 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.10s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.107.103.114 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.21s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-428453 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 1484507: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.21s)
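
minikube tunnel runs in the foreground and programs a route so LoadBalancer services receive a reachable ingress IP (the 10.107.103.114 address above); stopping the process removes the route, which is what DeleteTunnel verifies. A typical manual session, profile name hypothetical:

    minikube -p demo tunnel &      # keep running; may prompt for sudo
    kubectl get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
    kill %1                        # tear the tunnel down when finished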

TestFunctional/parallel/ServiceCmd/DeployApp (7.23s)
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1436: (dbg) Run:  kubectl --context functional-428453 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-428453 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-759d89bdcc-2btsb" [e404303b-e63b-439c-8f14-94246d521f29] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-759d89bdcc-2btsb" [e404303b-e63b-439c-8f14-94246d521f29] Running
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 7.012720033s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (7.23s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.45s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.45s)

TestFunctional/parallel/ProfileCmd/profile_list (0.43s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1314: Took "351.566661ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1328: Took "76.211142ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.43s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.45s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1365: Took "364.617972ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1378: Took "82.761803ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.45s)

TestFunctional/parallel/MountCmd/any-port (7.96s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-428453 /tmp/TestFunctionalparallelMountCmdany-port2427383418/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1701128519570670848" to /tmp/TestFunctionalparallelMountCmdany-port2427383418/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1701128519570670848" to /tmp/TestFunctionalparallelMountCmdany-port2427383418/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1701128519570670848" to /tmp/TestFunctionalparallelMountCmdany-port2427383418/001/test-1701128519570670848
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-428453 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-428453 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (487.956028ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-428453 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-428453 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Nov 27 23:41 created-by-test
-rw-r--r-- 1 docker docker 24 Nov 27 23:41 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Nov 27 23:41 test-1701128519570670848
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-428453 ssh cat /mount-9p/test-1701128519570670848
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-428453 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [65264c28-dd57-4242-8f46-45b58a4c9bd0] Pending
helpers_test.go:344: "busybox-mount" [65264c28-dd57-4242-8f46-45b58a4c9bd0] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [65264c28-dd57-4242-8f46-45b58a4c9bd0] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [65264c28-dd57-4242-8f46-45b58a4c9bd0] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.021747649s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-428453 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-428453 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-428453 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-428453 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-428453 /tmp/TestFunctionalparallelMountCmdany-port2427383418/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.96s)
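
The initial findmnt failure above is only a startup race: the 9p server had not finished mounting, so the helper retries and then succeeds. The equivalent manual flow, profile and directory hypothetical:

    minikube -p demo mount ./data:/mount-9p &
    minikube -p demo ssh "findmnt -T /mount-9p"    # shows a 9p filesystem once ready
    minikube -p demo ssh "ls -la /mount-9p"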

TestFunctional/parallel/ServiceCmd/List (0.67s)
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-linux-arm64 -p functional-428453 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.67s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.57s)
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-linux-arm64 -p functional-428453 service list -o json
functional_test.go:1493: Took "571.0195ms" to run "out/minikube-linux-arm64 -p functional-428453 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.57s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.44s)
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-linux-arm64 -p functional-428453 service --namespace=default --https --url hello-node
functional_test.go:1521: found endpoint: https://192.168.49.2:30796
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.44s)

TestFunctional/parallel/ServiceCmd/Format (0.45s)
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-linux-arm64 -p functional-428453 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.45s)

TestFunctional/parallel/ServiceCmd/URL (0.55s)
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-linux-arm64 -p functional-428453 service hello-node --url
functional_test.go:1564: found endpoint for hello-node: http://192.168.49.2:30796
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.55s)
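
Taken together, the ServiceCmd subtests cover the main output modes of minikube service; in one place, with a hypothetical profile demo:

    minikube -p demo service list                        # table of services
    minikube -p demo service list -o json
    minikube -p demo service hello-node --url            # plain http URL
    minikube -p demo service hello-node --https --url    # https variant
    minikube -p demo service hello-node --url --format={{.IP}}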

TestFunctional/parallel/MountCmd/specific-port (2.6s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-428453 /tmp/TestFunctionalparallelMountCmdspecific-port2277064063/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-428453 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-428453 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (679.086099ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-428453 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-428453 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-428453 /tmp/TestFunctionalparallelMountCmdspecific-port2277064063/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-428453 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-428453 ssh "sudo umount -f /mount-9p": exit status 1 (357.209827ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-428453 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-428453 /tmp/TestFunctionalparallelMountCmdspecific-port2277064063/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.60s)

TestFunctional/parallel/MountCmd/VerifyCleanup (2.22s)
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-428453 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1745741328/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-428453 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1745741328/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-428453 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1745741328/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-428453 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Done: out/minikube-linux-arm64 -p functional-428453 ssh "findmnt -T" /mount1: (1.209065134s)
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-428453 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-428453 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-428453 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-428453 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1745741328/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-428453 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1745741328/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-428453 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1745741328/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.22s)
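
The cleanup path exercised here is the --kill flag, which terminates every background mount process for the profile in one shot; that is why the three per-mount stop helpers afterwards find no parent process left:

    minikube -p demo mount --kill=true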

TestFunctional/parallel/Version/short (0.1s)
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-arm64 -p functional-428453 version --short
--- PASS: TestFunctional/parallel/Version/short (0.10s)

TestFunctional/parallel/Version/components (1.25s)
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-arm64 -p functional-428453 version -o=json --components
functional_test.go:2266: (dbg) Done: out/minikube-linux-arm64 -p functional-428453 version -o=json --components: (1.250184297s)
--- PASS: TestFunctional/parallel/Version/components (1.25s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.29s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-428453 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-428453 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.4
registry.k8s.io/kube-proxy:v1.28.4
registry.k8s.io/kube-controller-manager:v1.28.4
registry.k8s.io/kube-apiserver:v1.28.4
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-428453
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/kindest/kindnetd:v20230809-80a64d96
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-428453 image ls --format short --alsologtostderr:
I1127 23:42:35.337273 1488470 out.go:296] Setting OutFile to fd 1 ...
I1127 23:42:35.337534 1488470 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1127 23:42:35.337564 1488470 out.go:309] Setting ErrFile to fd 2...
I1127 23:42:35.337585 1488470 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1127 23:42:35.337918 1488470 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17206-1455288/.minikube/bin
I1127 23:42:35.338649 1488470 config.go:182] Loaded profile config "functional-428453": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1127 23:42:35.338821 1488470 config.go:182] Loaded profile config "functional-428453": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1127 23:42:35.339360 1488470 cli_runner.go:164] Run: docker container inspect functional-428453 --format={{.State.Status}}
I1127 23:42:35.363573 1488470 ssh_runner.go:195] Run: systemctl --version
I1127 23:42:35.363629 1488470 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-428453
I1127 23:42:35.385940 1488470 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34079 SSHKeyPath:/home/jenkins/minikube-integration/17206-1455288/.minikube/machines/functional-428453/id_rsa Username:docker}
I1127 23:42:35.483887 1488470 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.29s)
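
image ls supports four output formats, and the next three subtests exercise the others; for reference, with a hypothetical profile demo:

    minikube -p demo image ls --format short
    minikube -p demo image ls --format table
    minikube -p demo image ls --format json | jq '.[].repoTags'
    minikube -p demo image ls --format yaml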

TestFunctional/parallel/ImageCommands/ImageListTable (0.35s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-428453 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-428453 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| gcr.io/google-containers/addon-resizer  | functional-428453  | ffd4cfbbe753e | 34.1MB |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 1611cd07b61d5 | 3.77MB |
| registry.k8s.io/pause                   | 3.9                | 829e9de338bd5 | 520kB  |
| docker.io/library/nginx                 | alpine             | aae348c9fbd40 | 50.2MB |
| docker.io/library/nginx                 | latest             | 5628e5ea3c17f | 196MB  |
| registry.k8s.io/etcd                    | 3.5.9-0            | 9cdd6470f48c8 | 182MB  |
| registry.k8s.io/kube-apiserver          | v1.28.4            | 04b4c447bb9d4 | 121MB  |
| registry.k8s.io/kube-controller-manager | v1.28.4            | 9961cbceaf234 | 117MB  |
| registry.k8s.io/pause                   | latest             | 8cb2091f603e7 | 246kB  |
| registry.k8s.io/pause                   | 3.1                | 8057e0500773a | 529kB  |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | ba04bb24b9575 | 29MB   |
| registry.k8s.io/coredns/coredns         | v1.10.1            | 97e04611ad434 | 51.4MB |
| registry.k8s.io/echoserver-arm          | 1.8                | 72565bf5bbedf | 87.5MB |
| registry.k8s.io/kube-proxy              | v1.28.4            | 3ca3ca488cf13 | 70MB   |
| registry.k8s.io/kube-scheduler          | v1.28.4            | 05c284c929889 | 59.3MB |
| docker.io/kindest/kindnetd              | v20230809-80a64d96 | 04b4eaa3d3db8 | 60.9MB |
| registry.k8s.io/pause                   | 3.3                | 3d18732f8686c | 487kB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-428453 image ls --format table --alsologtostderr:
I1127 23:42:36.021260 1488604 out.go:296] Setting OutFile to fd 1 ...
I1127 23:42:36.021485 1488604 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1127 23:42:36.021497 1488604 out.go:309] Setting ErrFile to fd 2...
I1127 23:42:36.021504 1488604 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1127 23:42:36.021776 1488604 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17206-1455288/.minikube/bin
I1127 23:42:36.022802 1488604 config.go:182] Loaded profile config "functional-428453": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1127 23:42:36.022937 1488604 config.go:182] Loaded profile config "functional-428453": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1127 23:42:36.023509 1488604 cli_runner.go:164] Run: docker container inspect functional-428453 --format={{.State.Status}}
I1127 23:42:36.051565 1488604 ssh_runner.go:195] Run: systemctl --version
I1127 23:42:36.051635 1488604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-428453
I1127 23:42:36.084907 1488604 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34079 SSHKeyPath:/home/jenkins/minikube-integration/17206-1455288/.minikube/machines/functional-428453/id_rsa Username:docker}
I1127 23:42:36.188638 1488604 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.35s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.33s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-428453 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-428453 image ls --format json --alsologtostderr:
[{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29037500"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf"],"repoTags":[],"size":"247562353"},{"id":"97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108","repoDigests":["registry.k8s.io/coredns/coredns@sha256:74130b944396a0b0ca9af923ee6e03b08a35d98fc1bbaef4e35cf9acc5599105","registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9de
a45a6a89ae7e5e"],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"51393451"},{"id":"9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace","repoDigests":["registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3","registry.k8s.io/etcd@sha256:e60789d18cc66486e6db4094383f9732280092f07a1f5455ecbe11d404c8e48b"],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"182203183"},{"id":"9961cbceaf234d59b7dcf8a197a024f3e3ce4b7fe2b67c2378efd3d209ca994b","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c","registry.k8s.io/kube-controller-manager@sha256:fe49ea386d014cbf10cd16f53900b91bb7e7c32c5cf4fdcea550c33bedfdc92e"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.4"],"size":"117252916"},{"id":"3ca3ca488cf13fde14cfc4b3ffde0c53a8c161b030f4a444a797fba6aef38c39","repoDigests":["registry.k8s.io/kube-proxy@sha256:460fb2711108dbfbe9ad77860d0fd8aad842c0e97f1aee757b33f2218daece68","
registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"],"repoTags":["registry.k8s.io/kube-proxy:v1.28.4"],"size":"69992343"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3774172"},{"id":"04b4c447bb9d4840af3bf7e836397379d65df87c86e55dcd27f31a8d11df2419","repoDigests":["registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb","registry.k8s.io/kube-apiserver@sha256:a4c3e6bec39f5dcb221a2f08266513ab19b7d977ccc76a0bcaf04d4935ac0fb2"],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.4"],"size":"121119694"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":["registry.k8s.io/pause@sha
256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"528622"},{"id":"829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e","repoDigests":["registry.k8s.io/pause@sha256:3ec98b8452dc8ae265a6917dfb81587ac78849e520d5dbba6de524851d20eca6","registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"520014"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca"],"repoTags":["registry.k8s.io/pause:latest"],"size":"246070"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":["gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126"],"repoTags":["gcr.io/google-containers/addon-resizer:functional-428453"],"size":"34114467"},{"id":"a422e0e982
356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c","docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a"],"repoTags":[],"size":"42263767"},{"id":"aae348c9fbd40035f9fc24e2c9ccb9ac0a8977a3f3441a997bb40f6011d45e9b","repoDigests":["docker.io/library/nginx@sha256:b7537eea6ffa4f00aac311f16654b50736328eb370208c68b6649a97b7a2724b","docker.io/library/nginx@sha256:db353d0f0c479c91bd15e01fc68ed0f33d9c4c52f3415e63332c3d0bf7a4bb77"],"repoTags":["docker.io/library/nginx:alpine"],"size":"50212152"},{"id":"5628e5ea3c17fa1cbf49692edf41d5a1cdf198922898e6ffb29c19768dca8fd3","repoDigests":["docker.io/library/nginx@sha256:10d1f5b58f74683ad34eb29287e07dab1e90f10af243f151bb50aa5dbb4d62ee","docker.io/library/nginx@sha256:736342e81e97220f954b8c33846ba80d2d95f59b30225a5c63d063c8b250b0ab"],"repoTags":["docker.io/library/nginx:latest"],"size":"1962
11465"},{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":["registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5"],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"87536549"},{"id":"05c284c929889d88306fdb3dd14ee2d0132543740f9e247685243214fc3d2c54","repoDigests":["registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba","registry.k8s.io/kube-scheduler@sha256:ddb0fb05335238789e9a847f0c6731e1da918c42a389625cb2a7ec577ca20afe"],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.4"],"size":"59253556"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":["registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"487479"},{"id":"04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26","repoDigests":["docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2
460762a51a4aa9c80861f460af35800c05baab0573f923052","docker.io/kindest/kindnetd@sha256:f61a1c916e587322444cab4e745a66c8bed6c30208e4dae28d5a1d18c070adb2"],"repoTags":["docker.io/kindest/kindnetd:v20230809-80a64d96"],"size":"60867618"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-428453 image ls --format json --alsologtostderr:
I1127 23:42:35.674955 1488532 out.go:296] Setting OutFile to fd 1 ...
I1127 23:42:35.675237 1488532 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1127 23:42:35.675263 1488532 out.go:309] Setting ErrFile to fd 2...
I1127 23:42:35.675284 1488532 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1127 23:42:35.675560 1488532 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17206-1455288/.minikube/bin
I1127 23:42:35.676774 1488532 config.go:182] Loaded profile config "functional-428453": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1127 23:42:35.677808 1488532 config.go:182] Loaded profile config "functional-428453": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1127 23:42:35.678489 1488532 cli_runner.go:164] Run: docker container inspect functional-428453 --format={{.State.Status}}
I1127 23:42:35.705995 1488532 ssh_runner.go:195] Run: systemctl --version
I1127 23:42:35.706052 1488532 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-428453
I1127 23:42:35.758181 1488532 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34079 SSHKeyPath:/home/jenkins/minikube-integration/17206-1455288/.minikube/machines/functional-428453/id_rsa Username:docker}
I1127 23:42:35.860176 1488532 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.33s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.33s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-428453 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-428453 image ls --format yaml --alsologtostderr:
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests:
- registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "87536549"
- id: aae348c9fbd40035f9fc24e2c9ccb9ac0a8977a3f3441a997bb40f6011d45e9b
repoDigests:
- docker.io/library/nginx@sha256:b7537eea6ffa4f00aac311f16654b50736328eb370208c68b6649a97b7a2724b
- docker.io/library/nginx@sha256:db353d0f0c479c91bd15e01fc68ed0f33d9c4c52f3415e63332c3d0bf7a4bb77
repoTags:
- docker.io/library/nginx:alpine
size: "50212152"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests:
- gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126
repoTags:
- gcr.io/google-containers/addon-resizer:functional-428453
size: "34114467"
- id: 97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:74130b944396a0b0ca9af923ee6e03b08a35d98fc1bbaef4e35cf9acc5599105
- registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "51393451"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"
- id: 829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e
repoDigests:
- registry.k8s.io/pause@sha256:3ec98b8452dc8ae265a6917dfb81587ac78849e520d5dbba6de524851d20eca6
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
repoTags:
- registry.k8s.io/pause:3.9
size: "520014"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"
- id: 9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace
repoDigests:
- registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3
- registry.k8s.io/etcd@sha256:e60789d18cc66486e6db4094383f9732280092f07a1f5455ecbe11d404c8e48b
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "182203183"
- id: 04b4c447bb9d4840af3bf7e836397379d65df87c86e55dcd27f31a8d11df2419
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb
- registry.k8s.io/kube-apiserver@sha256:a4c3e6bec39f5dcb221a2f08266513ab19b7d977ccc76a0bcaf04d4935ac0fb2
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.4
size: "121119694"
- id: 3ca3ca488cf13fde14cfc4b3ffde0c53a8c161b030f4a444a797fba6aef38c39
repoDigests:
- registry.k8s.io/kube-proxy@sha256:460fb2711108dbfbe9ad77860d0fd8aad842c0e97f1aee757b33f2218daece68
- registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532
repoTags:
- registry.k8s.io/kube-proxy:v1.28.4
size: "69992343"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3774172"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29037500"
- id: 9961cbceaf234d59b7dcf8a197a024f3e3ce4b7fe2b67c2378efd3d209ca994b
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c
- registry.k8s.io/kube-controller-manager@sha256:fe49ea386d014cbf10cd16f53900b91bb7e7c32c5cf4fdcea550c33bedfdc92e
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.4
size: "117252916"
- id: 05c284c929889d88306fdb3dd14ee2d0132543740f9e247685243214fc3d2c54
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba
- registry.k8s.io/kube-scheduler@sha256:ddb0fb05335238789e9a847f0c6731e1da918c42a389625cb2a7ec577ca20afe
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.4
size: "59253556"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf
repoTags: []
size: "247562353"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
- docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a
repoTags: []
size: "42263767"
- id: 5628e5ea3c17fa1cbf49692edf41d5a1cdf198922898e6ffb29c19768dca8fd3
repoDigests:
- docker.io/library/nginx@sha256:10d1f5b58f74683ad34eb29287e07dab1e90f10af243f151bb50aa5dbb4d62ee
- docker.io/library/nginx@sha256:736342e81e97220f954b8c33846ba80d2d95f59b30225a5c63d063c8b250b0ab
repoTags:
- docker.io/library/nginx:latest
size: "196211465"
- id: 04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26
repoDigests:
- docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052
- docker.io/kindest/kindnetd@sha256:f61a1c916e587322444cab4e745a66c8bed6c30208e4dae28d5a1d18c070adb2
repoTags:
- docker.io/kindest/kindnetd:v20230809-80a64d96
size: "60867618"

functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-428453 image ls --format yaml --alsologtostderr:
I1127 23:42:35.350618 1488471 out.go:296] Setting OutFile to fd 1 ...
I1127 23:42:35.350746 1488471 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1127 23:42:35.350755 1488471 out.go:309] Setting ErrFile to fd 2...
I1127 23:42:35.350761 1488471 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1127 23:42:35.351022 1488471 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17206-1455288/.minikube/bin
I1127 23:42:35.351793 1488471 config.go:182] Loaded profile config "functional-428453": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1127 23:42:35.351972 1488471 config.go:182] Loaded profile config "functional-428453": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1127 23:42:35.352505 1488471 cli_runner.go:164] Run: docker container inspect functional-428453 --format={{.State.Status}}
I1127 23:42:35.380177 1488471 ssh_runner.go:195] Run: systemctl --version
I1127 23:42:35.380228 1488471 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-428453
I1127 23:42:35.405335 1488471 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34079 SSHKeyPath:/home/jenkins/minikube-integration/17206-1455288/.minikube/machines/functional-428453/id_rsa Username:docker}
I1127 23:42:35.504685 1488471 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.33s)
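The YAML listing above is rendered from the `sudo crictl images --output json` call visible in the stderr log. A minimal Go sketch of that decoding; the struct fields are assumed from the listing above and from CRI's JSON image schema, not copied from minikube's source:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// criImage mirrors the id/repoTags/repoDigests/size shape shown in the
// YAML listing above (field names assumed, not taken from minikube).
type criImage struct {
	ID          string   `json:"id"`
	RepoTags    []string `json:"repoTags"`
	RepoDigests []string `json:"repoDigests"`
	Size        string   `json:"size"`
}

type criImageList struct {
	Images []criImage `json:"images"`
}

func main() {
	// Assumes crictl is installed and the CRI socket is reachable; in the
	// log this command runs inside the minikube node over SSH.
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		panic(err)
	}
	var list criImageList
	if err := json.Unmarshal(out, &list); err != nil {
		panic(err)
	}
	for _, img := range list.Images {
		fmt.Printf("- id: %s\n  repoTags: %v\n  repoDigests: %v\n  size: %q\n",
			img.ID, img.RepoTags, img.RepoDigests, img.Size)
	}
}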
TestFunctional/parallel/ImageCommands/ImageBuild (3.21s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-arm64 -p functional-428453 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-428453 ssh pgrep buildkitd: exit status 1 (394.673869ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-arm64 -p functional-428453 image build -t localhost/my-image:functional-428453 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-arm64 -p functional-428453 image build -t localhost/my-image:functional-428453 testdata/build --alsologtostderr: (2.553299149s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-arm64 -p functional-428453 image build -t localhost/my-image:functional-428453 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> b9753c98279
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-428453
--> ae54c5aa9e3
Successfully tagged localhost/my-image:functional-428453
ae54c5aa9e3581df354e0e7669a91e5b6b0dff00a1c3621e377ff60eeb8c3bf2
functional_test.go:322: (dbg) Stderr: out/minikube-linux-arm64 -p functional-428453 image build -t localhost/my-image:functional-428453 testdata/build --alsologtostderr:
I1127 23:42:36.038336 1488610 out.go:296] Setting OutFile to fd 1 ...
I1127 23:42:36.041297 1488610 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1127 23:42:36.041369 1488610 out.go:309] Setting ErrFile to fd 2...
I1127 23:42:36.041393 1488610 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1127 23:42:36.041753 1488610 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17206-1455288/.minikube/bin
I1127 23:42:36.043402 1488610 config.go:182] Loaded profile config "functional-428453": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1127 23:42:36.046127 1488610 config.go:182] Loaded profile config "functional-428453": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1127 23:42:36.046814 1488610 cli_runner.go:164] Run: docker container inspect functional-428453 --format={{.State.Status}}
I1127 23:42:36.071126 1488610 ssh_runner.go:195] Run: systemctl --version
I1127 23:42:36.071181 1488610 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-428453
I1127 23:42:36.102256 1488610 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34079 SSHKeyPath:/home/jenkins/minikube-integration/17206-1455288/.minikube/machines/functional-428453/id_rsa Username:docker}
I1127 23:42:36.201478 1488610 build_images.go:151] Building image from path: /tmp/build.1370083367.tar
I1127 23:42:36.201600 1488610 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1127 23:42:36.216654 1488610 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1370083367.tar
I1127 23:42:36.222250 1488610 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1370083367.tar: stat -c "%s %y" /var/lib/minikube/build/build.1370083367.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.1370083367.tar': No such file or directory
I1127 23:42:36.222293 1488610 ssh_runner.go:362] scp /tmp/build.1370083367.tar --> /var/lib/minikube/build/build.1370083367.tar (3072 bytes)
I1127 23:42:36.258112 1488610 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1370083367
I1127 23:42:36.283783 1488610 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1370083367 -xf /var/lib/minikube/build/build.1370083367.tar
I1127 23:42:36.298396 1488610 crio.go:297] Building image: /var/lib/minikube/build/build.1370083367
I1127 23:42:36.298477 1488610 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-428453 /var/lib/minikube/build/build.1370083367 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I1127 23:42:38.463042 1488610 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-428453 /var/lib/minikube/build/build.1370083367 --cgroup-manager=cgroupfs: (2.164534215s)
I1127 23:42:38.463112 1488610 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1370083367
I1127 23:42:38.474060 1488610 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1370083367.tar
I1127 23:42:38.484693 1488610 build_images.go:207] Built localhost/my-image:functional-428453 from /tmp/build.1370083367.tar
I1127 23:42:38.484729 1488610 build_images.go:123] succeeded building to: functional-428453
I1127 23:42:38.484734 1488610 build_images.go:124] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-428453 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.21s)
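The three STEP lines in the stdout above imply a three-line Dockerfile. A sketch that recreates the same build; the Dockerfile body is reconstructed from the STEP 1/3..3/3 lines, and the real contents of testdata/build and content.txt are not shown in this report, so placeholders are used. It assumes it is run from the same workspace root the log uses, with the minikube binary at out/minikube-linux-arm64:

package main

import (
	"os"
	"os/exec"
	"path/filepath"
)

// Reconstructed from the logged build steps; content.txt is a placeholder.
const dockerfile = `FROM gcr.io/k8s-minikube/busybox
RUN true
ADD content.txt /
`

func main() {
	dir, err := os.MkdirTemp("", "build")
	if err != nil {
		panic(err)
	}
	defer os.RemoveAll(dir)
	if err := os.WriteFile(filepath.Join(dir, "Dockerfile"), []byte(dockerfile), 0o644); err != nil {
		panic(err)
	}
	if err := os.WriteFile(filepath.Join(dir, "content.txt"), []byte("placeholder\n"), 0o644); err != nil {
		panic(err)
	}
	// Mirrors the logged invocation; minikube packages dir into a tar,
	// copies it to the node, and builds it there with podman.
	cmd := exec.Command("out/minikube-linux-arm64", "-p", "functional-428453",
		"image", "build", "-t", "localhost/my-image:functional-428453", dir)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		panic(err)
	}
}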
TestFunctional/parallel/ImageCommands/Setup (3.56s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (3.537585264s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-428453
--- PASS: TestFunctional/parallel/ImageCommands/Setup (3.56s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (5.05s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-arm64 -p functional-428453 image load --daemon gcr.io/google-containers/addon-resizer:functional-428453 --alsologtostderr
2023/11/27 23:42:20 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:354: (dbg) Done: out/minikube-linux-arm64 -p functional-428453 image load --daemon gcr.io/google-containers/addon-resizer:functional-428453 --alsologtostderr: (4.717275593s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-428453 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (5.05s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.22s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-428453 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.22s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.26s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-428453 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.26s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.23s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-428453 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.23s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.3s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-arm64 -p functional-428453 image load --daemon gcr.io/google-containers/addon-resizer:functional-428453 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-arm64 -p functional-428453 image load --daemon gcr.io/google-containers/addon-resizer:functional-428453 --alsologtostderr: (2.990187928s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-428453 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.30s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.87s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (1.967673494s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-428453
functional_test.go:244: (dbg) Run:  out/minikube-linux-arm64 -p functional-428453 image load --daemon gcr.io/google-containers/addon-resizer:functional-428453 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-arm64 -p functional-428453 image load --daemon gcr.io/google-containers/addon-resizer:functional-428453 --alsologtostderr: (3.6243113s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-428453 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.87s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.91s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-arm64 -p functional-428453 image save gcr.io/google-containers/addon-resizer:functional-428453 /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.91s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.57s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-arm64 -p functional-428453 image rm gcr.io/google-containers/addon-resizer:functional-428453 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-428453 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.57s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.32s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-arm64 -p functional-428453 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-arm64 -p functional-428453 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar --alsologtostderr: (1.059733354s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-428453 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.32s)
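ImageSaveToFile, ImageRemove, and ImageLoadFromFile above form a save -> remove -> load round trip. Condensed as a Go sketch; the binary path, profile name, and image name are taken from the log, the tar path is shortened, and run() is a local helper:

package main

import (
	"os"
	"os/exec"
)

// run invokes the minikube binary used throughout this report.
func run(args ...string) {
	cmd := exec.Command("out/minikube-linux-arm64", args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		panic(err)
	}
}

func main() {
	img := "gcr.io/google-containers/addon-resizer:functional-428453"
	tar := "/tmp/addon-resizer-save.tar" // the log uses a Jenkins workspace path
	run("-p", "functional-428453", "image", "save", img, tar)
	run("-p", "functional-428453", "image", "rm", img)
	run("-p", "functional-428453", "image", "load", tar)
	run("-p", "functional-428453", "image", "ls") // confirm the image is back
}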
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.96s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-428453
functional_test.go:423: (dbg) Run:  out/minikube-linux-arm64 -p functional-428453 image save --daemon gcr.io/google-containers/addon-resizer:functional-428453 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-428453
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.96s)

TestFunctional/delete_addon-resizer_images (0.09s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-428453
--- PASS: TestFunctional/delete_addon-resizer_images (0.09s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-428453
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-428453
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestIngressAddonLegacy/StartLegacyK8sCluster (88.59s)

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-arm64 start -p ingress-addon-legacy-684553 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E1127 23:43:20.411599 1460652 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/addons-606180/client.crt: no such file or directory
E1127 23:43:48.099162 1460652 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/addons-606180/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-arm64 start -p ingress-addon-legacy-684553 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (1m28.584964875s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (88.59s)

TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (12.06s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-684553 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-684553 addons enable ingress --alsologtostderr -v=5: (12.057312268s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (12.06s)

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.65s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-684553 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.65s)

TestJSONOutput/start/Command (75.62s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-732623 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio
E1127 23:47:55.085970 1460652 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/functional-428453/client.crt: no such file or directory
E1127 23:48:20.411257 1460652 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/addons-606180/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-732623 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio: (1m15.613130485s)
--- PASS: TestJSONOutput/start/Command (75.62s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.81s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-732623 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.81s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.76s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-732623 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.76s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.91s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-732623 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-732623 --output=json --user=testUser: (5.913361904s)
--- PASS: TestJSONOutput/stop/Command (5.91s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.27s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-689929 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-689929 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (100.736712ms)

-- stdout --
	{"specversion":"1.0","id":"fd9f6f54-8ac8-4d77-989a-cba397f40633","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-689929] minikube v1.32.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"c920b31e-ccb1-4c55-9859-fdc6efcdf266","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17206"}}
	{"specversion":"1.0","id":"03d3d663-322f-46f8-a0d3-1b6ba49b2c35","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"cf6034cc-01f8-4f47-a339-5f18afb1eec2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17206-1455288/kubeconfig"}}
	{"specversion":"1.0","id":"75e381e1-4bf4-46d2-9ab3-99e6d8b1e4a3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17206-1455288/.minikube"}}
	{"specversion":"1.0","id":"4d51d859-ee0b-4e77-b148-1812e7cd2853","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"0f0c8031-82d3-45d1-adbe-be175e3dd0a0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"70948f57-f7ae-43aa-b3d7-c29eb856fc8c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-689929" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-689929
--- PASS: TestErrorJSONOutput (0.27s)
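Each stdout line above is a CloudEvents-style JSON record. A Go sketch that decodes one of them, using the error event from the stdout block verbatim; the struct shape is inferred from the keys shown, keeping only the common fields:

package main

import (
	"encoding/json"
	"fmt"
)

// minikubeEvent matches the specversion/id/source/type/data keys visible
// in the stdout block above (field names inferred from that output).
type minikubeEvent struct {
	SpecVersion     string            `json:"specversion"`
	ID              string            `json:"id"`
	Source          string            `json:"source"`
	Type            string            `json:"type"`
	DataContentType string            `json:"datacontenttype"`
	Data            map[string]string `json:"data"`
}

func main() {
	// The io.k8s.sigs.minikube.error event copied from the log above.
	line := `{"specversion":"1.0","id":"70948f57-f7ae-43aa-b3d7-c29eb856fc8c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}`
	var ev minikubeEvent
	if err := json.Unmarshal([]byte(line), &ev); err != nil {
		panic(err)
	}
	fmt.Println(ev.Type, ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
}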
TestKicCustomNetwork/create_custom_network (50.15s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-091243 --network=
E1127 23:49:17.006245 1460652 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/functional-428453/client.crt: no such file or directory
E1127 23:49:23.120257 1460652 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/ingress-addon-legacy-684553/client.crt: no such file or directory
E1127 23:49:23.125514 1460652 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/ingress-addon-legacy-684553/client.crt: no such file or directory
E1127 23:49:23.135774 1460652 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/ingress-addon-legacy-684553/client.crt: no such file or directory
E1127 23:49:23.156025 1460652 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/ingress-addon-legacy-684553/client.crt: no such file or directory
E1127 23:49:23.196498 1460652 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/ingress-addon-legacy-684553/client.crt: no such file or directory
E1127 23:49:23.276830 1460652 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/ingress-addon-legacy-684553/client.crt: no such file or directory
E1127 23:49:23.437190 1460652 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/ingress-addon-legacy-684553/client.crt: no such file or directory
E1127 23:49:23.757599 1460652 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/ingress-addon-legacy-684553/client.crt: no such file or directory
E1127 23:49:24.398123 1460652 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/ingress-addon-legacy-684553/client.crt: no such file or directory
E1127 23:49:25.678338 1460652 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/ingress-addon-legacy-684553/client.crt: no such file or directory
E1127 23:49:28.238627 1460652 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/ingress-addon-legacy-684553/client.crt: no such file or directory
E1127 23:49:33.359607 1460652 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/ingress-addon-legacy-684553/client.crt: no such file or directory
E1127 23:49:43.599779 1460652 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/ingress-addon-legacy-684553/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-091243 --network=: (47.905866656s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-091243" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-091243
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-091243: (2.226072186s)
--- PASS: TestKicCustomNetwork/create_custom_network (50.15s)

TestKicCustomNetwork/use_default_bridge_network (37.05s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-864973 --network=bridge
E1127 23:50:04.079995 1460652 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/ingress-addon-legacy-684553/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-864973 --network=bridge: (35.010255906s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-864973" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-864973
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-864973: (2.012368019s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (37.05s)

TestKicExistingNetwork (37.28s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-945716 --network=existing-network
E1127 23:50:45.040275 1460652 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/ingress-addon-legacy-684553/client.crt: no such file or directory
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-945716 --network=existing-network: (34.907943967s)
helpers_test.go:175: Cleaning up "existing-network-945716" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-945716
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-945716: (2.210535965s)
--- PASS: TestKicExistingNetwork (37.28s)

TestKicCustomSubnet (37.49s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-894462 --subnet=192.168.60.0/24
E1127 23:51:33.163855 1460652 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/functional-428453/client.crt: no such file or directory
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-894462 --subnet=192.168.60.0/24: (35.332842587s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-894462 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-894462" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-894462
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-894462: (2.125903393s)
--- PASS: TestKicCustomSubnet (37.49s)
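The subnet check above leans on a Docker Go template. The same probe as a standalone Go sketch; the network name, template, and expected subnet are taken from the logged command:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Mirrors: docker network inspect custom-subnet-894462 --format "{{(index .IPAM.Config 0).Subnet}}"
	out, err := exec.Command("docker", "network", "inspect", "custom-subnet-894462",
		"--format", "{{(index .IPAM.Config 0).Subnet}}").Output()
	if err != nil {
		panic(err)
	}
	got := strings.TrimSpace(string(out))
	if got != "192.168.60.0/24" {
		panic(fmt.Sprintf("unexpected subnet: %s", got))
	}
	fmt.Println("subnet matches:", got)
}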
TestKicStaticIP (33.69s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-692670 --static-ip=192.168.200.200
E1127 23:52:00.846469 1460652 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/functional-428453/client.crt: no such file or directory
E1127 23:52:06.961289 1460652 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/ingress-addon-legacy-684553/client.crt: no such file or directory
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-692670 --static-ip=192.168.200.200: (31.390114112s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-692670 ip
helpers_test.go:175: Cleaning up "static-ip-692670" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-692670
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-692670: (2.108161873s)
--- PASS: TestKicStaticIP (33.69s)
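TestKicStaticIP verifies the requested address by running `minikube ip`. The equivalent check as a Go sketch; the profile name and address come from the log, while the comparison against the --static-ip value is this sketch's own assumption about what the test asserts:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Mirrors: out/minikube-linux-arm64 -p static-ip-692670 ip
	out, err := exec.Command("out/minikube-linux-arm64", "-p", "static-ip-692670", "ip").Output()
	if err != nil {
		panic(err)
	}
	if got := strings.TrimSpace(string(out)); got != "192.168.200.200" {
		panic(fmt.Sprintf("unexpected ip: %s", got))
	}
	fmt.Println("static ip confirmed")
}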
TestMainNoArgs (0.07s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.07s)

TestMinikubeProfile (69.49s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-877436 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-877436 --driver=docker  --container-runtime=crio: (30.341627578s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-880776 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-880776 --driver=docker  --container-runtime=crio: (33.692725873s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-877436
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-880776
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-880776" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-880776
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-880776: (2.090831206s)
helpers_test.go:175: Cleaning up "first-877436" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-877436
E1127 23:53:20.411605 1460652 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/addons-606180/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-877436: (2.069230727s)
--- PASS: TestMinikubeProfile (69.49s)

TestMountStart/serial/StartWithMountFirst (9.46s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-127843 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-127843 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (8.460232472s)
--- PASS: TestMountStart/serial/StartWithMountFirst (9.46s)

TestMountStart/serial/VerifyMountFirst (0.31s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-127843 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.31s)

TestMountStart/serial/StartWithMountSecond (7s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-129599 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-129599 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (5.999806949s)
--- PASS: TestMountStart/serial/StartWithMountSecond (7.00s)

TestMountStart/serial/VerifyMountSecond (0.3s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-129599 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.30s)

TestMountStart/serial/DeleteFirst (1.67s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-127843 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-127843 --alsologtostderr -v=5: (1.666996444s)
--- PASS: TestMountStart/serial/DeleteFirst (1.67s)

TestMountStart/serial/VerifyMountPostDelete (0.32s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-129599 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.32s)

TestMountStart/serial/Stop (1.23s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-129599
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-129599: (1.232517635s)
--- PASS: TestMountStart/serial/Stop (1.23s)

TestMountStart/serial/RestartStopped (8s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-129599
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-129599: (6.995756428s)
--- PASS: TestMountStart/serial/RestartStopped (8.00s)

TestMountStart/serial/VerifyMountPostStop (0.29s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-129599 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.29s)

TestMultiNode/serial/FreshStart2Nodes (129.38s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:85: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-784312 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E1127 23:54:23.120360 1460652 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/ingress-addon-legacy-684553/client.crt: no such file or directory
E1127 23:54:43.460273 1460652 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/addons-606180/client.crt: no such file or directory
E1127 23:54:50.802146 1460652 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/ingress-addon-legacy-684553/client.crt: no such file or directory
multinode_test.go:85: (dbg) Done: out/minikube-linux-arm64 start -p multinode-784312 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (2m8.785998874s)
multinode_test.go:91: (dbg) Run:  out/minikube-linux-arm64 -p multinode-784312 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (129.38s)

TestMultiNode/serial/DeployApp2Nodes (5.66s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:481: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-784312 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:486: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-784312 -- rollout status deployment/busybox
multinode_test.go:486: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-784312 -- rollout status deployment/busybox: (3.400185106s)
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-784312 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:516: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-784312 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:524: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-784312 -- exec busybox-5bc68d56bd-cls7b -- nslookup kubernetes.io
multinode_test.go:524: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-784312 -- exec busybox-5bc68d56bd-dmvq4 -- nslookup kubernetes.io
multinode_test.go:534: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-784312 -- exec busybox-5bc68d56bd-cls7b -- nslookup kubernetes.default
multinode_test.go:534: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-784312 -- exec busybox-5bc68d56bd-dmvq4 -- nslookup kubernetes.default
multinode_test.go:542: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-784312 -- exec busybox-5bc68d56bd-cls7b -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:542: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-784312 -- exec busybox-5bc68d56bd-dmvq4 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.66s)
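The DeployApp2Nodes block above probes in-cluster DNS from each busybox pod. The same matrix of lookups as a Go sketch; the binary path, profile, and lookup names come from the log, and the pod names are per-run values copied from it:

package main

import (
	"os"
	"os/exec"
)

func main() {
	pods := []string{"busybox-5bc68d56bd-cls7b", "busybox-5bc68d56bd-dmvq4"}
	names := []string{"kubernetes.io", "kubernetes.default", "kubernetes.default.svc.cluster.local"}
	for _, pod := range pods {
		for _, name := range names {
			// Mirrors: out/minikube-linux-arm64 kubectl -p multinode-784312 -- exec <pod> -- nslookup <name>
			cmd := exec.Command("out/minikube-linux-arm64", "kubectl", "-p", "multinode-784312",
				"--", "exec", pod, "--", "nslookup", name)
			cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
			if err := cmd.Run(); err != nil {
				panic(err)
			}
		}
	}
}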
TestMultiNode/serial/AddNode (50.83s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:110: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-784312 -v 3 --alsologtostderr
E1127 23:56:33.163485 1460652 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/functional-428453/client.crt: no such file or directory
multinode_test.go:110: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-784312 -v 3 --alsologtostderr: (50.108728482s)
multinode_test.go:116: (dbg) Run:  out/minikube-linux-arm64 -p multinode-784312 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (50.83s)

TestMultiNode/serial/ProfileList (0.37s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:132: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.37s)

TestMultiNode/serial/CopyFile (11.33s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:173: (dbg) Run:  out/minikube-linux-arm64 -p multinode-784312 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-784312 cp testdata/cp-test.txt multinode-784312:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-784312 ssh -n multinode-784312 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-784312 cp multinode-784312:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1682404390/001/cp-test_multinode-784312.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-784312 ssh -n multinode-784312 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-784312 cp multinode-784312:/home/docker/cp-test.txt multinode-784312-m02:/home/docker/cp-test_multinode-784312_multinode-784312-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-784312 ssh -n multinode-784312 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-784312 ssh -n multinode-784312-m02 "sudo cat /home/docker/cp-test_multinode-784312_multinode-784312-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-784312 cp multinode-784312:/home/docker/cp-test.txt multinode-784312-m03:/home/docker/cp-test_multinode-784312_multinode-784312-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-784312 ssh -n multinode-784312 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-784312 ssh -n multinode-784312-m03 "sudo cat /home/docker/cp-test_multinode-784312_multinode-784312-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-784312 cp testdata/cp-test.txt multinode-784312-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-784312 ssh -n multinode-784312-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-784312 cp multinode-784312-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1682404390/001/cp-test_multinode-784312-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-784312 ssh -n multinode-784312-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-784312 cp multinode-784312-m02:/home/docker/cp-test.txt multinode-784312:/home/docker/cp-test_multinode-784312-m02_multinode-784312.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-784312 ssh -n multinode-784312-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-784312 ssh -n multinode-784312 "sudo cat /home/docker/cp-test_multinode-784312-m02_multinode-784312.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-784312 cp multinode-784312-m02:/home/docker/cp-test.txt multinode-784312-m03:/home/docker/cp-test_multinode-784312-m02_multinode-784312-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-784312 ssh -n multinode-784312-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-784312 ssh -n multinode-784312-m03 "sudo cat /home/docker/cp-test_multinode-784312-m02_multinode-784312-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-784312 cp testdata/cp-test.txt multinode-784312-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-784312 ssh -n multinode-784312-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-784312 cp multinode-784312-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1682404390/001/cp-test_multinode-784312-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-784312 ssh -n multinode-784312-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-784312 cp multinode-784312-m03:/home/docker/cp-test.txt multinode-784312:/home/docker/cp-test_multinode-784312-m03_multinode-784312.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-784312 ssh -n multinode-784312-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-784312 ssh -n multinode-784312 "sudo cat /home/docker/cp-test_multinode-784312-m03_multinode-784312.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-784312 cp multinode-784312-m03:/home/docker/cp-test.txt multinode-784312-m02:/home/docker/cp-test_multinode-784312-m03_multinode-784312-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-784312 ssh -n multinode-784312-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-784312 ssh -n multinode-784312-m02 "sudo cat /home/docker/cp-test_multinode-784312-m03_multinode-784312-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (11.33s)
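The copy matrix above reduces to a few primitive moves, each verified with ssh + cat. A minimal sketch of one round trip using the same profile and paths (the /tmp destination name is illustrative):

    # host -> node
    minikube -p multinode-784312 cp testdata/cp-test.txt multinode-784312:/home/docker/cp-test.txt
    minikube -p multinode-784312 ssh -n multinode-784312 "sudo cat /home/docker/cp-test.txt"
    # node -> host
    minikube -p multinode-784312 cp multinode-784312:/home/docker/cp-test.txt /tmp/cp-test-back.txt
    # node -> node
    minikube -p multinode-784312 cp multinode-784312:/home/docker/cp-test.txt multinode-784312-m02:/home/docker/cp-test_multinode-784312_multinode-784312-m02.txt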

TestMultiNode/serial/StopNode (2.47s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:210: (dbg) Run:  out/minikube-linux-arm64 -p multinode-784312 node stop m03
multinode_test.go:210: (dbg) Done: out/minikube-linux-arm64 -p multinode-784312 node stop m03: (1.275455345s)
multinode_test.go:216: (dbg) Run:  out/minikube-linux-arm64 -p multinode-784312 status
multinode_test.go:216: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-784312 status: exit status 7 (591.91667ms)

-- stdout --
	multinode-784312
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-784312-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-784312-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:223: (dbg) Run:  out/minikube-linux-arm64 -p multinode-784312 status --alsologtostderr
multinode_test.go:223: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-784312 status --alsologtostderr: exit status 7 (597.152411ms)

-- stdout --
	multinode-784312
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-784312-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-784312-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1127 23:57:15.485834 1535289 out.go:296] Setting OutFile to fd 1 ...
	I1127 23:57:15.486119 1535289 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1127 23:57:15.486148 1535289 out.go:309] Setting ErrFile to fd 2...
	I1127 23:57:15.486168 1535289 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1127 23:57:15.486451 1535289 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17206-1455288/.minikube/bin
	I1127 23:57:15.486667 1535289 out.go:303] Setting JSON to false
	I1127 23:57:15.486841 1535289 notify.go:220] Checking for updates...
	I1127 23:57:15.487971 1535289 mustload.go:65] Loading cluster: multinode-784312
	I1127 23:57:15.490055 1535289 config.go:182] Loaded profile config "multinode-784312": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1127 23:57:15.490109 1535289 status.go:255] checking status of multinode-784312 ...
	I1127 23:57:15.490647 1535289 cli_runner.go:164] Run: docker container inspect multinode-784312 --format={{.State.Status}}
	I1127 23:57:15.509380 1535289 status.go:330] multinode-784312 host status = "Running" (err=<nil>)
	I1127 23:57:15.509402 1535289 host.go:66] Checking if "multinode-784312" exists ...
	I1127 23:57:15.509712 1535289 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-784312
	I1127 23:57:15.529067 1535289 host.go:66] Checking if "multinode-784312" exists ...
	I1127 23:57:15.529356 1535289 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1127 23:57:15.529405 1535289 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-784312
	I1127 23:57:15.559300 1535289 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34144 SSHKeyPath:/home/jenkins/minikube-integration/17206-1455288/.minikube/machines/multinode-784312/id_rsa Username:docker}
	I1127 23:57:15.661219 1535289 ssh_runner.go:195] Run: systemctl --version
	I1127 23:57:15.666970 1535289 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1127 23:57:15.682846 1535289 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1127 23:57:15.756913 1535289 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:40 OomKillDisable:true NGoroutines:55 SystemTime:2023-11-27 23:57:15.747059302 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1050-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1127 23:57:15.757654 1535289 kubeconfig.go:92] found "multinode-784312" server: "https://192.168.58.2:8443"
	I1127 23:57:15.757678 1535289 api_server.go:166] Checking apiserver status ...
	I1127 23:57:15.757722 1535289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1127 23:57:15.770395 1535289 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1277/cgroup
	I1127 23:57:15.781486 1535289 api_server.go:182] apiserver freezer: "5:freezer:/docker/6fd8b66557924c6a255350f8721ffd2110fc0027983ca6688ef0474215f93244/crio/crio-63f1099897213983b5ce0a7bf940993d50b2f562ae391d142e087dc375141ece"
	I1127 23:57:15.781560 1535289 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/6fd8b66557924c6a255350f8721ffd2110fc0027983ca6688ef0474215f93244/crio/crio-63f1099897213983b5ce0a7bf940993d50b2f562ae391d142e087dc375141ece/freezer.state
	I1127 23:57:15.792106 1535289 api_server.go:204] freezer state: "THAWED"
	I1127 23:57:15.792133 1535289 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1127 23:57:15.801142 1535289 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I1127 23:57:15.801170 1535289 status.go:421] multinode-784312 apiserver status = Running (err=<nil>)
	I1127 23:57:15.801181 1535289 status.go:257] multinode-784312 status: &{Name:multinode-784312 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1127 23:57:15.801198 1535289 status.go:255] checking status of multinode-784312-m02 ...
	I1127 23:57:15.801510 1535289 cli_runner.go:164] Run: docker container inspect multinode-784312-m02 --format={{.State.Status}}
	I1127 23:57:15.832137 1535289 status.go:330] multinode-784312-m02 host status = "Running" (err=<nil>)
	I1127 23:57:15.832165 1535289 host.go:66] Checking if "multinode-784312-m02" exists ...
	I1127 23:57:15.832462 1535289 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-784312-m02
	I1127 23:57:15.855256 1535289 host.go:66] Checking if "multinode-784312-m02" exists ...
	I1127 23:57:15.855570 1535289 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1127 23:57:15.855612 1535289 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-784312-m02
	I1127 23:57:15.874594 1535289 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34149 SSHKeyPath:/home/jenkins/minikube-integration/17206-1455288/.minikube/machines/multinode-784312-m02/id_rsa Username:docker}
	I1127 23:57:15.964262 1535289 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1127 23:57:15.978372 1535289 status.go:257] multinode-784312-m02 status: &{Name:multinode-784312-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1127 23:57:15.978405 1535289 status.go:255] checking status of multinode-784312-m03 ...
	I1127 23:57:15.978706 1535289 cli_runner.go:164] Run: docker container inspect multinode-784312-m03 --format={{.State.Status}}
	I1127 23:57:16.015697 1535289 status.go:330] multinode-784312-m03 host status = "Stopped" (err=<nil>)
	I1127 23:57:16.015720 1535289 status.go:343] host is not running, skipping remaining checks
	I1127 23:57:16.015728 1535289 status.go:257] multinode-784312-m03 status: &{Name:multinode-784312-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.47s)
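Note the exit-code contract this test leans on: with one node stopped, `minikube status` still prints the table but exits 7 rather than 0, so the state can be asserted without parsing output. A sketch (the echo is only for illustration):

    minikube -p multinode-784312 node stop m03
    minikube -p multinode-784312 status
    echo $?   # 7 observed here while m03 is stopped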

TestMultiNode/serial/StartAfterStop (12.89s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:244: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-784312 node start m03 --alsologtostderr
multinode_test.go:254: (dbg) Done: out/minikube-linux-arm64 -p multinode-784312 node start m03 --alsologtostderr: (12.05601181s)
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-784312 status
multinode_test.go:275: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (12.89s)

TestMultiNode/serial/RestartKeepsNodes (120.34s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:283: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-784312
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-784312
multinode_test.go:290: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-784312: (25.125296243s)
multinode_test.go:295: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-784312 --wait=true -v=8 --alsologtostderr
E1127 23:58:20.411491 1460652 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/addons-606180/client.crt: no such file or directory
E1127 23:59:23.120427 1460652 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/ingress-addon-legacy-684553/client.crt: no such file or directory
multinode_test.go:295: (dbg) Done: out/minikube-linux-arm64 start -p multinode-784312 --wait=true -v=8 --alsologtostderr: (1m35.064342541s)
multinode_test.go:300: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-784312
--- PASS: TestMultiNode/serial/RestartKeepsNodes (120.34s)
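Put differently, a full stop/start cycle is expected to preserve the node inventory, so comparing `node list` output before and after suffices. A sketch with the flags from this run (the /tmp paths are illustrative):

    minikube node list -p multinode-784312 > /tmp/nodes.before
    minikube stop -p multinode-784312
    minikube start -p multinode-784312 --wait=true -v=8 --alsologtostderr
    minikube node list -p multinode-784312 > /tmp/nodes.after
    diff /tmp/nodes.before /tmp/nodes.after   # empty diff = node set survived the restart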

TestMultiNode/serial/DeleteNode (5.16s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:394: (dbg) Run:  out/minikube-linux-arm64 -p multinode-784312 node delete m03
multinode_test.go:394: (dbg) Done: out/minikube-linux-arm64 -p multinode-784312 node delete m03: (4.391236373s)
multinode_test.go:400: (dbg) Run:  out/minikube-linux-arm64 -p multinode-784312 status --alsologtostderr
multinode_test.go:414: (dbg) Run:  docker volume ls
multinode_test.go:424: (dbg) Run:  kubectl get nodes
multinode_test.go:432: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.16s)
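The go-template one-liner above is a useful standalone check: it prints one status line per node's Ready condition, so after deleting m03 a healthy two-node cluster should print exactly two True lines. Roughly:

    minikube -p multinode-784312 node delete m03
    kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'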

TestMultiNode/serial/StopMultiNode (24.06s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 -p multinode-784312 stop
multinode_test.go:314: (dbg) Done: out/minikube-linux-arm64 -p multinode-784312 stop: (23.849336533s)
multinode_test.go:320: (dbg) Run:  out/minikube-linux-arm64 -p multinode-784312 status
multinode_test.go:320: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-784312 status: exit status 7 (105.399774ms)

-- stdout --
	multinode-784312
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-784312-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:327: (dbg) Run:  out/minikube-linux-arm64 -p multinode-784312 status --alsologtostderr
multinode_test.go:327: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-784312 status --alsologtostderr: exit status 7 (106.035214ms)

-- stdout --
	multinode-784312
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-784312-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1127 23:59:58.435432 1543443 out.go:296] Setting OutFile to fd 1 ...
	I1127 23:59:58.436697 1543443 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1127 23:59:58.436710 1543443 out.go:309] Setting ErrFile to fd 2...
	I1127 23:59:58.436717 1543443 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1127 23:59:58.436988 1543443 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17206-1455288/.minikube/bin
	I1127 23:59:58.437183 1543443 out.go:303] Setting JSON to false
	I1127 23:59:58.437276 1543443 mustload.go:65] Loading cluster: multinode-784312
	I1127 23:59:58.437719 1543443 config.go:182] Loaded profile config "multinode-784312": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1127 23:59:58.437736 1543443 status.go:255] checking status of multinode-784312 ...
	I1127 23:59:58.438270 1543443 notify.go:220] Checking for updates...
	I1127 23:59:58.438357 1543443 cli_runner.go:164] Run: docker container inspect multinode-784312 --format={{.State.Status}}
	I1127 23:59:58.456132 1543443 status.go:330] multinode-784312 host status = "Stopped" (err=<nil>)
	I1127 23:59:58.456155 1543443 status.go:343] host is not running, skipping remaining checks
	I1127 23:59:58.456162 1543443 status.go:257] multinode-784312 status: &{Name:multinode-784312 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1127 23:59:58.456194 1543443 status.go:255] checking status of multinode-784312-m02 ...
	I1127 23:59:58.456485 1543443 cli_runner.go:164] Run: docker container inspect multinode-784312-m02 --format={{.State.Status}}
	I1127 23:59:58.474420 1543443 status.go:330] multinode-784312-m02 host status = "Stopped" (err=<nil>)
	I1127 23:59:58.474444 1543443 status.go:343] host is not running, skipping remaining checks
	I1127 23:59:58.474451 1543443 status.go:257] multinode-784312-m02 status: &{Name:multinode-784312-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.06s)

TestMultiNode/serial/RestartMultiNode (79.22s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:344: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:354: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-784312 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:354: (dbg) Done: out/minikube-linux-arm64 start -p multinode-784312 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m18.33907933s)
multinode_test.go:360: (dbg) Run:  out/minikube-linux-arm64 -p multinode-784312 status --alsologtostderr
multinode_test.go:374: (dbg) Run:  kubectl get nodes
multinode_test.go:382: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (79.22s)

TestMultiNode/serial/ValidateNameConflict (34.23s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:443: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-784312
multinode_test.go:452: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-784312-m02 --driver=docker  --container-runtime=crio
multinode_test.go:452: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-784312-m02 --driver=docker  --container-runtime=crio: exit status 14 (107.060634ms)

-- stdout --
	* [multinode-784312-m02] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17206
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17206-1455288/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17206-1455288/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-784312-m02' is duplicated with machine name 'multinode-784312-m02' in profile 'multinode-784312'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:460: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-784312-m03 --driver=docker  --container-runtime=crio
E1128 00:01:33.163357 1460652 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/functional-428453/client.crt: no such file or directory
multinode_test.go:460: (dbg) Done: out/minikube-linux-arm64 start -p multinode-784312-m03 --driver=docker  --container-runtime=crio: (31.514368331s)
multinode_test.go:467: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-784312
multinode_test.go:467: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-784312: exit status 80 (380.214061ms)

-- stdout --
	* Adding node m03 to cluster multinode-784312
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-784312-m03 already exists in multinode-784312-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-784312-m03
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-784312-m03: (2.154473452s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (34.23s)
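Two separate guards are exercised here: a new profile name may not collide with a machine name inside an existing profile (exit 14, MK_USAGE), and `node add` refuses to create a node whose generated name is already taken (exit 80, GUEST_NODE_ADD). Condensed:

    minikube start -p multinode-784312-m02 --driver=docker --container-runtime=crio   # refused: clashes with an existing machine name
    minikube start -p multinode-784312-m03 --driver=docker --container-runtime=crio   # allowed: free name, but...
    minikube node add -p multinode-784312                                             # ...refused: the next node would be m03
    minikube delete -p multinode-784312-m03                                           # cleanup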

TestPreload (175.2s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-169927 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4
E1128 00:02:56.206693 1460652 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/functional-428453/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-169927 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4: (1m21.189726798s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-169927 image pull gcr.io/k8s-minikube/busybox
E1128 00:03:20.411160 1460652 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/addons-606180/client.crt: no such file or directory
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-169927 image pull gcr.io/k8s-minikube/busybox: (2.40958862s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-169927
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-169927: (5.872718622s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-169927 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
E1128 00:04:23.120322 1460652 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/ingress-addon-legacy-684553/client.crt: no such file or directory
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-169927 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (1m23.064196736s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-169927 image list
helpers_test.go:175: Cleaning up "test-preload-169927" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-169927
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-169927: (2.39642159s)
--- PASS: TestPreload (175.20s)
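This is the usual preload regression shape: build a cluster with --preload=false, add an image, then restart and confirm the image survives. Condensed from the commands above:

    minikube start -p test-preload-169927 --memory=2200 --preload=false --driver=docker --container-runtime=crio --kubernetes-version=v1.24.4
    minikube -p test-preload-169927 image pull gcr.io/k8s-minikube/busybox
    minikube stop -p test-preload-169927
    minikube start -p test-preload-169927 --memory=2200 --wait=true --driver=docker --container-runtime=crio
    minikube -p test-preload-169927 image list   # gcr.io/k8s-minikube/busybox should still appear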

TestInsufficientStorage (10.97s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-590095 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-590095 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (8.358015826s)

-- stdout --
	{"specversion":"1.0","id":"3c50b634-e853-414d-b016-5eb78b6828da","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-590095] minikube v1.32.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"67ff436f-3733-4386-9aa6-f78b8cf860e7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17206"}}
	{"specversion":"1.0","id":"8d8335c8-3e2c-419b-83e5-866b9d5c7100","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"36d2a061-6547-4147-a1ef-e2e428655b0c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17206-1455288/kubeconfig"}}
	{"specversion":"1.0","id":"d4c76711-f858-4553-8c4d-895668ead394","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17206-1455288/.minikube"}}
	{"specversion":"1.0","id":"61a3dbbe-6835-459d-8698-37846d134c31","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"45e14530-2d82-4d53-8b9a-ee3509cd513a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"44050a48-a526-4872-9d32-cc643277487a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"cedfd0ea-a4eb-49e1-8e24-aaa8df7051af","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"55b8611b-8ac4-4d92-bcd3-969bd839308b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"44cbfe12-6e8c-4e9a-a00d-223246fdeb07","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"990577dc-4be0-4e1a-ad87-22013808a141","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-590095 in cluster insufficient-storage-590095","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"3fc53a69-52a3-4a67-a223-73ce661cd205","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"625e89da-a56a-47c3-8e92-94b7af1d0cc9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"0b4f5e41-d802-473e-aea7-b342102f4250","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-590095 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-590095 --output=json --layout=cluster: exit status 7 (330.844175ms)

-- stdout --
	{"Name":"insufficient-storage-590095","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-590095","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E1128 00:05:36.402747 1559913 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-590095" does not appear in /home/jenkins/minikube-integration/17206-1455288/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-590095 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-590095 --output=json --layout=cluster: exit status 7 (333.196711ms)

-- stdout --
	{"Name":"insufficient-storage-590095","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-590095","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E1128 00:05:36.737612 1559968 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-590095" does not appear in /home/jenkins/minikube-integration/17206-1455288/kubeconfig
	E1128 00:05:36.750218 1559968 status.go:559] unable to read event log: stat: stat /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/insufficient-storage-590095/events.json: no such file or directory

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-590095" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-590095
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-590095: (1.950063347s)
--- PASS: TestInsufficientStorage (10.97s)
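The storage figures come from the MINIKUBE_TEST_STORAGE_CAPACITY and MINIKUBE_TEST_AVAILABLE_STORAGE variables visible in the JSON events, which presumably let the suite simulate a full /var without actually filling the disk; exit code 26 (RSRC_DOCKER_STORAGE) is the assertion. Sketch:

    MINIKUBE_TEST_STORAGE_CAPACITY=100 MINIKUBE_TEST_AVAILABLE_STORAGE=19 \
      minikube start -p insufficient-storage-590095 --memory=2048 --output=json --wait=true --driver=docker --container-runtime=crio
    echo $?   # 26 observed here; the error text suggests --force to skip the check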

TestKubernetesUpgrade (373.62s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:235: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-020854 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:235: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-020854 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m2.499490729s)
version_upgrade_test.go:240: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-020854
version_upgrade_test.go:240: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-020854: (1.300956584s)
version_upgrade_test.go:245: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-020854 status --format={{.Host}}
version_upgrade_test.go:245: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-020854 status --format={{.Host}}: exit status 7 (80.918172ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:247: status error: exit status 7 (may be ok)
version_upgrade_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-020854 --memory=2200 --kubernetes-version=v1.29.0-rc.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1128 00:08:20.411807 1460652 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/addons-606180/client.crt: no such file or directory
E1128 00:09:23.120377 1460652 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/ingress-addon-legacy-684553/client.crt: no such file or directory
version_upgrade_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-020854 --memory=2200 --kubernetes-version=v1.29.0-rc.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m40.882346122s)
version_upgrade_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-020854 version --output=json
version_upgrade_test.go:280: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:282: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-020854 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:282: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-020854 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=crio: exit status 106 (93.589972ms)

-- stdout --
	* [kubernetes-upgrade-020854] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17206
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17206-1455288/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17206-1455288/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.29.0-rc.0 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-020854
	    minikube start -p kubernetes-upgrade-020854 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-0208542 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.29.0-rc.0, by running:
	    
	    minikube start -p kubernetes-upgrade-020854 --kubernetes-version=v1.29.0-rc.0
	    

** /stderr **
version_upgrade_test.go:286: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:288: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-020854 --memory=2200 --kubernetes-version=v1.29.0-rc.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:288: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-020854 --memory=2200 --kubernetes-version=v1.29.0-rc.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (26.256196859s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-020854" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-020854
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-020854: (2.420106114s)
--- PASS: TestKubernetesUpgrade (373.62s)
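The upgrade path exercised here is stop-then-start with a newer --kubernetes-version; the reverse direction is refused outright (exit 106, K8S_DOWNGRADE_UNSUPPORTED) with the delete/recreate suggestions quoted above. Condensed:

    minikube start -p kubernetes-upgrade-020854 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker --container-runtime=crio
    minikube stop -p kubernetes-upgrade-020854
    minikube start -p kubernetes-upgrade-020854 --memory=2200 --kubernetes-version=v1.29.0-rc.0 --driver=docker --container-runtime=crio   # upgrade succeeds
    minikube start -p kubernetes-upgrade-020854 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker --container-runtime=crio        # downgrade: exit 106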

TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-071422 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-071422 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio: exit status 14 (104.922747ms)

-- stdout --
	* [NoKubernetes-071422] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17206
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17206-1455288/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17206-1455288/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)

TestNoKubernetes/serial/StartWithK8s (45.14s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-071422 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-071422 --driver=docker  --container-runtime=crio: (44.631456188s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-071422 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (45.14s)

TestNoKubernetes/serial/StartWithStopK8s (14.17s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-071422 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-071422 --no-kubernetes --driver=docker  --container-runtime=crio: (11.191930432s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-071422 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-071422 status -o json: exit status 2 (526.238559ms)

-- stdout --
	{"Name":"NoKubernetes-071422","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-071422
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-071422: (2.444804066s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (14.17s)

TestNoKubernetes/serial/Start (9.27s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-071422 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-071422 --no-kubernetes --driver=docker  --container-runtime=crio: (9.267534365s)
--- PASS: TestNoKubernetes/serial/Start (9.27s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.3s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-071422 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-071422 "sudo systemctl is-active --quiet service kubelet": exit status 1 (303.94706ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.30s)
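The assertion rides on systemctl's exit-code convention rather than its output: `is-active --quiet` exits 0 for an active unit and non-zero otherwise (status 3 here), and `minikube ssh` propagates the remote exit code. Sketch:

    minikube ssh -p NoKubernetes-071422 "sudo systemctl is-active --quiet service kubelet"
    echo $?   # non-zero while kubelet is not running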

TestNoKubernetes/serial/ProfileList (0.78s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.78s)

TestNoKubernetes/serial/Stop (1.24s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-071422
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-071422: (1.235221652s)
--- PASS: TestNoKubernetes/serial/Stop (1.24s)

TestNoKubernetes/serial/StartNoArgs (7.9s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-071422 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-071422 --driver=docker  --container-runtime=crio: (7.897143073s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.90s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.3s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-071422 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-071422 "sudo systemctl is-active --quiet service kubelet": exit status 1 (295.966525ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.30s)

TestStoppedBinaryUpgrade/Setup (1.45s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.45s)

TestPause/serial/Start (76s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-694851 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-694851 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (1m16.003747711s)
--- PASS: TestPause/serial/Start (76.00s)

TestPause/serial/SecondStartNoReconfiguration (35.62s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-694851 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1128 00:16:33.163355 1460652 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/functional-428453/client.crt: no such file or directory
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-694851 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (35.600722853s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (35.62s)

TestPause/serial/Pause (0.85s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-694851 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.85s)

TestPause/serial/VerifyStatus (0.39s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-694851 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-694851 --output=json --layout=cluster: exit status 2 (391.549702ms)

-- stdout --
	{"Name":"pause-694851","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-694851","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.39s)
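With --layout=cluster, status maps component states to HTTP-style codes (200 OK, 405 Stopped, 418 Paused, plus 507 InsufficientStorage earlier in this report), and the command itself exits 2 here for a paused cluster, so both the JSON and the exit code are checkable. Sketch:

    minikube pause -p pause-694851
    minikube status -p pause-694851 --output=json --layout=cluster   # StatusCode 418 for the apiserver; exit status 2 observed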

TestPause/serial/Unpause (0.81s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-694851 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.81s)

TestPause/serial/PauseAgain (1.03s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-694851 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-694851 --alsologtostderr -v=5: (1.02715255s)
--- PASS: TestPause/serial/PauseAgain (1.03s)

TestPause/serial/DeletePaused (2.88s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-694851 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-694851 --alsologtostderr -v=5: (2.878991681s)
--- PASS: TestPause/serial/DeletePaused (2.88s)

TestPause/serial/VerifyDeletedResources (0.41s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-694851
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-694851: exit status 1 (17.732667ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: get pause-694851: no such volume

** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.41s)
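
The deletion check above is a negative assertion: once the profile is gone, `docker volume inspect pause-694851` must exit non-zero (note the daemon error on stderr and the empty `[]` on stdout). A minimal Go helper expressing that assertion, sketched here for illustration only (the name assertVolumeGone is mine, not from helpers_test.go):

package verify

import (
	"os/exec"
	"testing"
)

// assertVolumeGone fails the test if the named docker volume can still
// be inspected, i.e. if `docker volume inspect` exits zero.
func assertVolumeGone(t *testing.T, name string) {
	t.Helper()
	if err := exec.Command("docker", "volume", "inspect", name).Run(); err == nil {
		t.Fatalf("docker volume %q still exists after delete", name)
	}
}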

TestNetworkPlugins/group/false (4.28s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-079523 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-079523 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (204.96092ms)

-- stdout --
	* [false-079523] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17206
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17206-1455288/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17206-1455288/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I1128 00:17:38.434086 1595857 out.go:296] Setting OutFile to fd 1 ...
	I1128 00:17:38.434242 1595857 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1128 00:17:38.434252 1595857 out.go:309] Setting ErrFile to fd 2...
	I1128 00:17:38.434259 1595857 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1128 00:17:38.434530 1595857 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17206-1455288/.minikube/bin
	I1128 00:17:38.434966 1595857 out.go:303] Setting JSON to false
	I1128 00:17:38.435975 1595857 start.go:128] hostinfo: {"hostname":"ip-172-31-29-130","uptime":25208,"bootTime":1701105451,"procs":257,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1050-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1128 00:17:38.436061 1595857 start.go:138] virtualization:  
	I1128 00:17:38.438607 1595857 out.go:177] * [false-079523] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I1128 00:17:38.441043 1595857 out.go:177]   - MINIKUBE_LOCATION=17206
	I1128 00:17:38.441195 1595857 notify.go:220] Checking for updates...
	I1128 00:17:38.445455 1595857 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1128 00:17:38.447169 1595857 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17206-1455288/kubeconfig
	I1128 00:17:38.448807 1595857 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17206-1455288/.minikube
	I1128 00:17:38.450824 1595857 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1128 00:17:38.452790 1595857 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1128 00:17:38.454973 1595857 config.go:182] Loaded profile config "stopped-upgrade-714093": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I1128 00:17:38.455065 1595857 driver.go:378] Setting default libvirt URI to qemu:///system
	I1128 00:17:38.479969 1595857 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1128 00:17:38.480086 1595857 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1128 00:17:38.562128 1595857 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:47 OomKillDisable:true NGoroutines:55 SystemTime:2023-11-28 00:17:38.552037019 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1050-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1128 00:17:38.562234 1595857 docker.go:295] overlay module found
	I1128 00:17:38.564226 1595857 out.go:177] * Using the docker driver based on user configuration
	I1128 00:17:38.565815 1595857 start.go:298] selected driver: docker
	I1128 00:17:38.565829 1595857 start.go:902] validating driver "docker" against <nil>
	I1128 00:17:38.566042 1595857 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1128 00:17:38.568550 1595857 out.go:177] 
	W1128 00:17:38.570393 1595857 out.go:239] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1128 00:17:38.572140 1595857 out.go:177] 

** /stderr **
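
Exit status 14 (MK_USAGE) is the expected outcome of this subtest: minikube rejects `--cni=false` up front because the crio container runtime has no built-in pod networking and requires a CNI plugin. For comparison, an invocation that crio does accept pins a CNI explicitly (illustrative only; not executed in this run):

	out/minikube-linux-arm64 start -p false-079523 --memory=2048 --cni=bridge --driver=docker --container-runtime=crio
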
net_test.go:88: 
----------------------- debugLogs start: false-079523 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-079523

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-079523

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-079523

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-079523

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-079523

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-079523

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-079523

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-079523

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-079523

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-079523

>>> host: /etc/nsswitch.conf:
* Profile "false-079523" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-079523"

>>> host: /etc/hosts:
* Profile "false-079523" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-079523"

>>> host: /etc/resolv.conf:
* Profile "false-079523" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-079523"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-079523

>>> host: crictl pods:
* Profile "false-079523" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-079523"

>>> host: crictl containers:
* Profile "false-079523" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-079523"

>>> k8s: describe netcat deployment:
error: context "false-079523" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-079523" does not exist

>>> k8s: netcat logs:
error: context "false-079523" does not exist

>>> k8s: describe coredns deployment:
error: context "false-079523" does not exist

>>> k8s: describe coredns pods:
error: context "false-079523" does not exist

>>> k8s: coredns logs:
error: context "false-079523" does not exist

>>> k8s: describe api server pod(s):
error: context "false-079523" does not exist

>>> k8s: api server logs:
error: context "false-079523" does not exist

>>> host: /etc/cni:
* Profile "false-079523" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-079523"

>>> host: ip a s:
* Profile "false-079523" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-079523"

>>> host: ip r s:
* Profile "false-079523" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-079523"

>>> host: iptables-save:
* Profile "false-079523" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-079523"

>>> host: iptables table nat:
* Profile "false-079523" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-079523"

>>> k8s: describe kube-proxy daemon set:
error: context "false-079523" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-079523" does not exist

>>> k8s: kube-proxy logs:
error: context "false-079523" does not exist

>>> host: kubelet daemon status:
* Profile "false-079523" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-079523"

>>> host: kubelet daemon config:
* Profile "false-079523" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-079523"

>>> k8s: kubelet logs:
* Profile "false-079523" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-079523"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-079523" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-079523"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-079523" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-079523"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-079523

>>> host: docker daemon status:
* Profile "false-079523" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-079523"

>>> host: docker daemon config:
* Profile "false-079523" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-079523"

>>> host: /etc/docker/daemon.json:
* Profile "false-079523" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-079523"

>>> host: docker system info:
* Profile "false-079523" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-079523"

>>> host: cri-docker daemon status:
* Profile "false-079523" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-079523"

>>> host: cri-docker daemon config:
* Profile "false-079523" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-079523"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-079523" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-079523"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-079523" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-079523"

>>> host: cri-dockerd version:
* Profile "false-079523" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-079523"

>>> host: containerd daemon status:
* Profile "false-079523" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-079523"

>>> host: containerd daemon config:
* Profile "false-079523" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-079523"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-079523" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-079523"

>>> host: /etc/containerd/config.toml:
* Profile "false-079523" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-079523"

>>> host: containerd config dump:
* Profile "false-079523" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-079523"

>>> host: crio daemon status:
* Profile "false-079523" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-079523"

>>> host: crio daemon config:
* Profile "false-079523" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-079523"

>>> host: /etc/crio:
* Profile "false-079523" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-079523"

>>> host: crio config:
* Profile "false-079523" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-079523"

----------------------- debugLogs end: false-079523 [took: 3.908004558s] --------------------------------
helpers_test.go:175: Cleaning up "false-079523" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-079523
--- PASS: TestNetworkPlugins/group/false (4.28s)

TestStartStop/group/old-k8s-version/serial/FirstStart (120.63s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-428712 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0
E1128 00:23:20.412174 1460652 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/addons-606180/client.crt: no such file or directory
E1128 00:24:23.120301 1460652 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/ingress-addon-legacy-684553/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-428712 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0: (2m0.634500257s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (120.63s)
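
A note on the E1128 cert_rotation lines above (they recur through the remaining tests): the client.crt paths they reference belong to profiles torn down earlier in this run (addons-606180, ingress-addon-legacy-684553), so they look like background certificate watchers outliving their profiles; in this run they accompany passing tests and do not affect the results recorded here.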

TestStartStop/group/old-k8s-version/serial/DeployApp (9.7s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-428712 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [d90f4a1d-ab44-4fc3-87a8-a03795dca37d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [d90f4a1d-ab44-4fc3-87a8-a03795dca37d] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.037221164s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-428712 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.70s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.12s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-428712 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-428712 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.12s)

TestStartStop/group/old-k8s-version/serial/Stop (12.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-428712 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-428712 --alsologtostderr -v=3: (12.084970706s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.09s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.23s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-428712 -n old-k8s-version-428712
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-428712 -n old-k8s-version-428712: exit status 7 (93.778379ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-428712 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.23s)

TestStartStop/group/old-k8s-version/serial/SecondStart (432.13s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-428712 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0
E1128 00:26:33.164050 1460652 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/functional-428453/client.crt: no such file or directory
E1128 00:28:03.461120 1460652 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/addons-606180/client.crt: no such file or directory
E1128 00:28:20.411676 1460652 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/addons-606180/client.crt: no such file or directory
E1128 00:29:23.120495 1460652 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/ingress-addon-legacy-684553/client.crt: no such file or directory
E1128 00:31:33.163503 1460652 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/functional-428453/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-428712 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0: (7m11.727318982s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-428712 -n old-k8s-version-428712
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (432.13s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.03s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-mklr9" [7bff141a-9af3-44c1-a687-788bd1dca64b] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.023620045s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.03s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.13s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-mklr9" [7bff141a-9af3-44c1-a687-788bd1dca64b] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.012144384s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-428712 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.13s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.39s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 ssh -p old-k8s-version-428712 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20220726-ed811e41
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.39s)

TestStartStop/group/old-k8s-version/serial/Pause (3.39s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-428712 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-428712 -n old-k8s-version-428712
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-428712 -n old-k8s-version-428712: exit status 2 (344.581323ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-428712 -n old-k8s-version-428712
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-428712 -n old-k8s-version-428712: exit status 2 (358.500579ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-428712 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-428712 -n old-k8s-version-428712
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-428712 -n old-k8s-version-428712
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.39s)

TestStartStop/group/no-preload/serial/FirstStart (64.57s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-313354 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.0
E1128 00:33:20.412018 1460652 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/addons-606180/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-313354 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.0: (1m4.565144499s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (64.57s)

TestStartStop/group/no-preload/serial/DeployApp (11.02s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-313354 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [f57642e3-0cfb-44b4-9746-e835e9c9ffaf] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [f57642e3-0cfb-44b4-9746-e835e9c9ffaf] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.024799546s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-313354 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (11.02s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.16s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-313354 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-313354 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.030478955s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-313354 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.16s)

TestStartStop/group/no-preload/serial/Stop (12.11s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-313354 --alsologtostderr -v=3
E1128 00:34:23.120373 1460652 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/ingress-addon-legacy-684553/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-313354 --alsologtostderr -v=3: (12.111256561s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.11s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-313354 -n no-preload-313354
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-313354 -n no-preload-313354: exit status 7 (95.189077ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-313354 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.22s)

TestStartStop/group/no-preload/serial/SecondStart (353.17s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-313354 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.0
E1128 00:35:05.559287 1460652 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/old-k8s-version-428712/client.crt: no such file or directory
E1128 00:35:05.564806 1460652 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/old-k8s-version-428712/client.crt: no such file or directory
E1128 00:35:05.575237 1460652 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/old-k8s-version-428712/client.crt: no such file or directory
E1128 00:35:05.595505 1460652 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/old-k8s-version-428712/client.crt: no such file or directory
E1128 00:35:05.636301 1460652 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/old-k8s-version-428712/client.crt: no such file or directory
E1128 00:35:05.716588 1460652 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/old-k8s-version-428712/client.crt: no such file or directory
E1128 00:35:05.877484 1460652 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/old-k8s-version-428712/client.crt: no such file or directory
E1128 00:35:06.198568 1460652 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/old-k8s-version-428712/client.crt: no such file or directory
E1128 00:35:06.839055 1460652 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/old-k8s-version-428712/client.crt: no such file or directory
E1128 00:35:08.119911 1460652 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/old-k8s-version-428712/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-313354 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.0: (5m52.472784911s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-313354 -n no-preload-313354
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (353.17s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (14.04s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-47nwg" [1b40f0ee-4c45-4c4f-8b73-cf6658e6dc99] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-47nwg" [1b40f0ee-4c45-4c4f-8b73-cf6658e6dc99] Running
E1128 00:40:33.243720 1460652 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/old-k8s-version-428712/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 14.043344283s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (14.04s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.12s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-47nwg" [1b40f0ee-4c45-4c4f-8b73-cf6658e6dc99] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.010599178s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-313354 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.12s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.35s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 ssh -p no-preload-313354 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.35s)

TestStartStop/group/no-preload/serial/Pause (3.47s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-313354 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-313354 -n no-preload-313354
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-313354 -n no-preload-313354: exit status 2 (404.473426ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-313354 -n no-preload-313354
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-313354 -n no-preload-313354: exit status 2 (380.211048ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-313354 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-313354 -n no-preload-313354
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-313354 -n no-preload-313354
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.47s)

TestStartStop/group/embed-certs/serial/FirstStart (83.49s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-190619 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4
E1128 00:41:33.163406 1460652 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/functional-428453/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-190619 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4: (1m23.486588817s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (83.49s)

TestStartStop/group/embed-certs/serial/DeployApp (8.5s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-190619 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [0b140be6-c715-47b0-a6fb-e954e8f30651] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [0b140be6-c715-47b0-a6fb-e954e8f30651] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.02514319s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-190619 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.50s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.17s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-190619 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-190619 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.044745102s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-190619 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.17s)

TestStartStop/group/embed-certs/serial/Stop (12.06s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-190619 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-190619 --alsologtostderr -v=3: (12.057356715s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.06s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-190619 -n embed-certs-190619
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-190619 -n embed-certs-190619: exit status 7 (91.248486ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-190619 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.22s)
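minikube status encodes cluster state in its exit code, which is why the harness notes "may be ok" above: exit status 7 with Host=Stopped is the expected shape for a stopped profile, and addons can still be enabled in that state. The same probe as a shell sketch:

	out/minikube-linux-arm64 status --format='{{.Host}}' -p embed-certs-190619 -n embed-certs-190619
	rc=$?   # 0 = running; 7 observed above for a stopped host
	[ "$rc" -eq 0 ] || echo "host not running (exit $rc); addon enable still works while stopped"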

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (352.3s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-190619 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4
E1128 00:43:20.411284 1460652 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/addons-606180/client.crt: no such file or directory
E1128 00:44:02.358618 1460652 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/no-preload-313354/client.crt: no such file or directory
E1128 00:44:02.364296 1460652 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/no-preload-313354/client.crt: no such file or directory
E1128 00:44:02.374615 1460652 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/no-preload-313354/client.crt: no such file or directory
E1128 00:44:02.395684 1460652 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/no-preload-313354/client.crt: no such file or directory
E1128 00:44:02.436006 1460652 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/no-preload-313354/client.crt: no such file or directory
E1128 00:44:02.516844 1460652 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/no-preload-313354/client.crt: no such file or directory
E1128 00:44:02.676956 1460652 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/no-preload-313354/client.crt: no such file or directory
E1128 00:44:02.997590 1460652 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/no-preload-313354/client.crt: no such file or directory
E1128 00:44:03.638432 1460652 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/no-preload-313354/client.crt: no such file or directory
E1128 00:44:04.919251 1460652 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/no-preload-313354/client.crt: no such file or directory
E1128 00:44:07.479852 1460652 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/no-preload-313354/client.crt: no such file or directory
E1128 00:44:12.600870 1460652 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/no-preload-313354/client.crt: no such file or directory
E1128 00:44:22.841488 1460652 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/no-preload-313354/client.crt: no such file or directory
E1128 00:44:23.119769 1460652 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/ingress-addon-legacy-684553/client.crt: no such file or directory
E1128 00:44:43.322482 1460652 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/no-preload-313354/client.crt: no such file or directory
E1128 00:44:43.461614 1460652 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/addons-606180/client.crt: no such file or directory
E1128 00:45:05.559414 1460652 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/old-k8s-version-428712/client.crt: no such file or directory
E1128 00:45:24.283131 1460652 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/no-preload-313354/client.crt: no such file or directory
E1128 00:46:33.163826 1460652 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/functional-428453/client.crt: no such file or directory
E1128 00:46:46.203302 1460652 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/no-preload-313354/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-190619 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4: (5m51.554639866s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-190619 -n embed-certs-190619
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (352.30s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (0.87s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:219: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-714093
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.87s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (89.57s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-674956 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4
E1128 00:48:20.412129 1460652 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/addons-606180/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-674956 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4: (1m29.569465467s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (89.57s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (17.1s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-jcbkl" [7d9195be-77ed-43c7-abd3-0fd990276b2f] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-jcbkl" [7d9195be-77ed-43c7-abd3-0fd990276b2f] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 17.095411989s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (17.10s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.14s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-jcbkl" [7d9195be-77ed-43c7-abd3-0fd990276b2f] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.014569814s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-190619 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.14s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.39s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 ssh -p embed-certs-190619 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.39s)
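The image audit shells into the node and dumps CRI-O's image store as JSON; the two images flagged above are the kindnet CNI and the busybox test workload, both expected. Listing repo tags by hand, as a sketch (assumes jq is available on the host):

	out/minikube-linux-arm64 ssh -p embed-certs-190619 "sudo crictl images -o json" \
	  | jq -r '.images[].repoTags[]'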

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (3.39s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-190619 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-190619 -n embed-certs-190619
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-190619 -n embed-certs-190619: exit status 2 (370.284342ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-190619 -n embed-certs-190619
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-190619 -n embed-certs-190619: exit status 2 (373.548575ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-190619 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-190619 -n embed-certs-190619
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-190619 -n embed-certs-190619
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.39s)
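Pause freezes the control-plane containers, so status then reports APIServer=Paused and Kubelet=Stopped, each via exit status 2, and unpause restores both. The observed sequence condensed into a sketch:

	out/minikube-linux-arm64 pause -p embed-certs-190619 --alsologtostderr -v=1
	out/minikube-linux-arm64 status --format='{{.APIServer}}' -p embed-certs-190619   # "Paused", exit 2
	out/minikube-linux-arm64 status --format='{{.Kubelet}}' -p embed-certs-190619     # "Stopped", exit 2
	out/minikube-linux-arm64 unpause -p embed-certs-190619 --alsologtostderr -v=1
	out/minikube-linux-arm64 status --format='{{.APIServer}}' -p embed-certs-190619   # exits 0 again once unpaused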

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (44.71s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-917319 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.0
E1128 00:49:02.358527 1460652 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/no-preload-313354/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-917319 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.0: (44.709853291s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (44.71s)
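--extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 hands the CIDR to kubeadm, which surfaces it on the controller manager as --cluster-cidr. Confirming it took effect, as a sketch (the component label is the one kubeadm puts on its static pods):

	kubectl --context newest-cni-917319 -n kube-system get pod -l component=kube-controller-manager \
	  -o jsonpath='{.items[0].spec.containers[0].command}' | tr ',' '\n' | grep cluster-cidr
	# expect: --cluster-cidr=10.42.0.0/16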

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.69s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-674956 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [b58df441-0d50-4781-a384-45bd8b993e0a] Pending
helpers_test.go:344: "busybox" [b58df441-0d50-4781-a384-45bd8b993e0a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [b58df441-0d50-4781-a384-45bd8b993e0a] Running
E1128 00:49:23.119769 1460652 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/ingress-addon-legacy-684553/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.050206671s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-674956 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.69s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.31s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-674956 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-674956 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.178662021s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-674956 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.31s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (12.26s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-674956 --alsologtostderr -v=3
E1128 00:49:30.044405 1460652 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/no-preload-313354/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-674956 --alsologtostderr -v=3: (12.258861766s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.26s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.1s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-917319 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-917319 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.096799983s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.10s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (1.31s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-917319 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-917319 --alsologtostderr -v=3: (1.313188885s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.31s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-917319 -n newest-cni-917319
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-917319 -n newest-cni-917319: exit status 7 (91.827597ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-917319 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.22s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (38.41s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-917319 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-917319 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.0: (38.00260048s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-917319 -n newest-cni-917319
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (38.41s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.37s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-674956 -n default-k8s-diff-port-674956
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-674956 -n default-k8s-diff-port-674956: exit status 7 (172.179215ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-674956 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.37s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (356.5s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-674956 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4
E1128 00:50:05.560120 1460652 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/old-k8s-version-428712/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-674956 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4: (5m55.613326643s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-674956 -n default-k8s-diff-port-674956
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (356.50s)
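--apiserver-port=8444 moves the API endpoint off the default 8443, and the profile's kubeconfig entry should reflect that. A quick check, as a sketch:

	kubectl config view \
	  -o jsonpath='{.clusters[?(@.name=="default-k8s-diff-port-674956")].cluster.server}'
	# expect a server URL ending in :8444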

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.38s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 ssh -p newest-cni-917319 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.38s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (3.24s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-917319 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-917319 -n newest-cni-917319
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-917319 -n newest-cni-917319: exit status 2 (360.213625ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-917319 -n newest-cni-917319
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-917319 -n newest-cni-917319: exit status 2 (364.792439ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-917319 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-917319 -n newest-cni-917319
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-917319 -n newest-cni-917319
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.24s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (74.78s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-079523 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
E1128 00:51:28.604646 1460652 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/old-k8s-version-428712/client.crt: no such file or directory
E1128 00:51:33.163441 1460652 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/functional-428453/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-079523 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (1m14.776504483s)
--- PASS: TestNetworkPlugins/group/auto/Start (74.78s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.36s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-079523 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.36s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (10.39s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-079523 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-nxlmr" [272b1af7-d547-47f1-af0f-9df70d54990c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-nxlmr" [272b1af7-d547-47f1-af0f-9df70d54990c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.017778704s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.39s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.24s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-079523 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.24s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.22s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-079523 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.22s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.2s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-079523 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.20s)
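Each network-plugin group runs the same three data-path probes against the netcat deployment: in-cluster DNS resolution, a loopback connect, and a hairpin connect back through the pod's own service. Condensed from the commands logged above:

	kubectl --context auto-079523 exec deployment/netcat -- nslookup kubernetes.default                    # DNS
	kubectl --context auto-079523 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"    # loopback
	kubectl --context auto-079523 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"       # hairpin, via its own service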

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (79.82s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-079523 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
E1128 00:52:56.207785 1460652 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/functional-428453/client.crt: no such file or directory
E1128 00:53:20.411545 1460652 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/addons-606180/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-079523 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (1m19.82050484s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (79.82s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (5.04s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-lpjrv" [94fc8e57-bb70-43d7-98ec-e0cbf9ef0e55] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.034357723s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.04s)
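The ControllerPod step waits for the CNI daemon pod (label app=kindnet) to become healthy before exercising the data path; kubectl wait expresses the same check directly, as a sketch:

	kubectl --context kindnet-079523 -n kube-system wait \
	  --for=condition=ready pod -l app=kindnet --timeout=10m0s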

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.32s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-079523 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.32s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (10.33s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-079523 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-f2kjr" [9f43ebd9-4ae3-44cf-aa0d-34a62c3a8943] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-f2kjr" [9f43ebd9-4ae3-44cf-aa0d-34a62c3a8943] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.010558807s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.33s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.23s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-079523 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.23s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.2s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-079523 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.20s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.2s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-079523 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.20s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (76.52s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-079523 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
E1128 00:54:23.119724 1460652 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/ingress-addon-legacy-684553/client.crt: no such file or directory
E1128 00:55:05.559183 1460652 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/old-k8s-version-428712/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-079523 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (1m16.516377323s)
--- PASS: TestNetworkPlugins/group/calico/Start (76.52s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (5.04s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-gd7xs" [9743625e-de49-452b-98a1-abfd12db0301] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.037232969s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.04s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.43s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-079523 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.43s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (14.44s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-079523 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-blhf8" [257d6d5f-0bf5-4c2c-b530-5a1db7bd17e5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-blhf8" [257d6d5f-0bf5-4c2c-b530-5a1db7bd17e5] Running
E1128 00:55:46.166158 1460652 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/ingress-addon-legacy-684553/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 14.01402464s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (14.44s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (10.08s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-b79nk" [99033007-51e4-481a-8dcc-184a910c47d1] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-b79nk" [99033007-51e4-481a-8dcc-184a910c47d1] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 10.075784067s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (10.08s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-b79nk" [99033007-51e4-481a-8dcc-184a910c47d1] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.014453359s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-674956 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.11s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.24s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-079523 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.24s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.23s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-079523 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.23s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.2s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-079523 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.20s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.4s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 ssh -p default-k8s-diff-port-674956 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.40s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (4.81s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-674956 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-674956 -n default-k8s-diff-port-674956
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-674956 -n default-k8s-diff-port-674956: exit status 2 (370.203526ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-674956 -n default-k8s-diff-port-674956
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-674956 -n default-k8s-diff-port-674956: exit status 2 (458.68939ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-674956 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 unpause -p default-k8s-diff-port-674956 --alsologtostderr -v=1: (1.223807481s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-674956 -n default-k8s-diff-port-674956
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-674956 -n default-k8s-diff-port-674956
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (4.81s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (69.91s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-079523 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-079523 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (1m9.912313555s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (69.91s)
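Unlike the built-in names (kindnet, calico, flannel, bridge), --cni also accepts a path to an arbitrary CNI manifest, which is how this group installs flannel from the repo's testdata. The generic form, as a sketch (profile name and path are placeholders):

	minikube start -p custom-cni-demo --cni=/path/to/cni-manifest.yaml \
	  --driver=docker --container-runtime=crio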

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (93.48s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-079523 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
E1128 00:56:33.163539 1460652 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/functional-428453/client.crt: no such file or directory
E1128 00:56:39.673221 1460652 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/auto-079523/client.crt: no such file or directory
E1128 00:56:39.678551 1460652 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/auto-079523/client.crt: no such file or directory
E1128 00:56:39.688829 1460652 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/auto-079523/client.crt: no such file or directory
E1128 00:56:39.709073 1460652 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/auto-079523/client.crt: no such file or directory
E1128 00:56:39.749342 1460652 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/auto-079523/client.crt: no such file or directory
E1128 00:56:39.829588 1460652 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/auto-079523/client.crt: no such file or directory
E1128 00:56:39.990025 1460652 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/auto-079523/client.crt: no such file or directory
E1128 00:56:40.310537 1460652 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/auto-079523/client.crt: no such file or directory
E1128 00:56:40.950705 1460652 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/auto-079523/client.crt: no such file or directory
E1128 00:56:42.231104 1460652 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/auto-079523/client.crt: no such file or directory
E1128 00:56:44.792015 1460652 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/auto-079523/client.crt: no such file or directory
E1128 00:56:49.913052 1460652 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/auto-079523/client.crt: no such file or directory
E1128 00:57:00.164578 1460652 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/auto-079523/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-079523 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m33.480728203s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (93.48s)
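--enable-default-cni is the legacy spelling for the built-in bridge CNI; current minikube documents it as deprecated in favor of --cni=bridge, which the bridge group below exercises explicitly. Equivalent invocations, as a sketch (placeholder profile name):

	minikube start -p cni-demo --enable-default-cni=true   # legacy flag
	minikube start -p cni-demo --cni=bridge                # current equivalent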

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.48s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-079523 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.48s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (10.53s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-079523 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-sfw9v" [540c3510-7cec-4162-a249-f6609262a938] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-sfw9v" [540c3510-7cec-4162-a249-f6609262a938] Running
E1128 00:57:20.645688 1460652 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/auto-079523/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.015678106s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.53s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.21s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-079523 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.21s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.19s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-079523 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.19s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.2s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-079523 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.20s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (54.6s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-079523 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-079523 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (54.603457566s)
--- PASS: TestNetworkPlugins/group/flannel/Start (54.60s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.46s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-079523 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.46s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (14.77s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-079523 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-kkqbm" [b7fc74ca-fcc7-4fea-9b4f-ac7591e2bd79] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-kkqbm" [b7fc74ca-fcc7-4fea-9b4f-ac7591e2bd79] Running
E1128 00:58:01.606309 1460652 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/auto-079523/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 14.018609901s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (14.77s)
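
Editor's note: the NetCatPod step deploys testdata/netcat-deployment.yaml and then polls for app=netcat pods to go Ready. Much the same wait can be expressed with kubectl alone; a sketch under the same context assumption (note that kubectl wait can error out immediately if no matching pod exists yet, which is one reason a harness might poll instead):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Block until every app=netcat pod reports Ready or the timeout
		// expires, matching the harness's 15m0s pod wait in spirit.
		out, err := exec.Command("kubectl", "--context", "enable-default-cni-079523",
			"wait", "--for=condition=Ready", "pod",
			"-l", "app=netcat", "-n", "default",
			"--timeout=15m").CombinedOutput()
		fmt.Printf("%s", out)
		if err != nil {
			fmt.Println("wait failed:", err)
		}
	}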

TestNetworkPlugins/group/enable-default-cni/DNS (0.25s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-079523 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.25s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.2s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-079523 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.20s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.22s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-079523 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.22s)

TestNetworkPlugins/group/bridge/Start (49.23s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-079523 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
E1128 00:58:33.320285 1460652 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/kindnet-079523/client.crt: no such file or directory
E1128 00:58:33.960615 1460652 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/kindnet-079523/client.crt: no such file or directory
E1128 00:58:35.240786 1460652 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/kindnet-079523/client.crt: no such file or directory
E1128 00:58:37.801012 1460652 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/kindnet-079523/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-079523 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (49.225120956s)
--- PASS: TestNetworkPlugins/group/bridge/Start (49.23s)
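
Editor's note: each Start case shells out to the minikube binary with the flags echoed above. When reproducing a start outside the harness it helps to bound the whole invocation; a sketch using a context deadline (the 20m figure is an arbitrary example, not a harness value):

	package main

	import (
		"context"
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		// Kill the start if it wedges; --wait-timeout only bounds the
		// in-cluster readiness wait, not the process itself.
		ctx, cancel := context.WithTimeout(context.Background(), 20*time.Minute)
		defer cancel()
		cmd := exec.CommandContext(ctx, "out/minikube-linux-arm64", "start",
			"-p", "bridge-079523", "--memory=3072", "--alsologtostderr",
			"--wait=true", "--wait-timeout=15m", "--cni=bridge",
			"--driver=docker", "--container-runtime=crio")
		out, err := cmd.CombinedOutput()
		fmt.Printf("%s", out)
		if err != nil {
			fmt.Println("start failed:", err)
		}
	}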

TestNetworkPlugins/group/flannel/ControllerPod (5.11s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-4jnbv" [8760ba59-2c6e-4a79-94dc-d8d05b52c355] Running
E1128 00:58:42.921962 1460652 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/kindnet-079523/client.crt: no such file or directory
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 5.108262144s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (5.11s)
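
Editor's note: ControllerPod verifies the CNI's own agent (here the kube-flannel-ds DaemonSet pod) is healthy before the connectivity probes run. A rough poll-loop equivalent (hypothetical; the harness uses its own helpers):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	func main() {
		deadline := time.Now().Add(10 * time.Minute)
		for time.Now().Before(deadline) {
			// Phases of every app=flannel pod, space-separated, e.g. "Running".
			out, err := exec.Command("kubectl", "--context", "flannel-079523",
				"get", "pods", "-n", "kube-flannel", "-l", "app=flannel",
				"-o", "jsonpath={.items[*].status.phase}").Output()
			phases := strings.Fields(string(out))
			running := err == nil && len(phases) > 0
			for _, p := range phases {
				if p != "Running" {
					running = false
				}
			}
			if running {
				fmt.Println("flannel agent ready:", phases)
				return
			}
			time.Sleep(5 * time.Second)
		}
		fmt.Println("timed out waiting for kube-flannel pods")
	}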

TestNetworkPlugins/group/flannel/KubeletFlags (0.38s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-079523 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.38s)

TestNetworkPlugins/group/flannel/NetCatPod (11.4s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-079523 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-cw9qn" [f83a4373-4c68-4e24-9de6-677ebbe87e6e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-cw9qn" [f83a4373-4c68-4e24-9de6-677ebbe87e6e] Running
E1128 00:58:53.162941 1460652 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/kindnet-079523/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.014307507s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.40s)

TestNetworkPlugins/group/flannel/DNS (0.3s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-079523 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.30s)

TestNetworkPlugins/group/flannel/Localhost (0.29s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-079523 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.29s)

TestNetworkPlugins/group/flannel/HairPin (0.33s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-079523 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.33s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.34s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-079523 "pgrep -a kubelet"
E1128 00:59:22.411204 1460652 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/default-k8s-diff-port-674956/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.34s)

TestNetworkPlugins/group/bridge/NetCatPod (10.32s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-079523 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-jk8gt" [2b9bab61-3053-44b5-8fce-3b1fa06c04af] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1128 00:59:23.119943 1460652 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/ingress-addon-legacy-684553/client.crt: no such file or directory
E1128 00:59:23.527032 1460652 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/auto-079523/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-jk8gt" [2b9bab61-3053-44b5-8fce-3b1fa06c04af] Running
E1128 00:59:27.532248 1460652 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-1455288/.minikube/profiles/default-k8s-diff-port-674956/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.011016269s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.32s)

TestNetworkPlugins/group/bridge/DNS (0.24s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-079523 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.24s)

TestNetworkPlugins/group/bridge/Localhost (0.21s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-079523 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.21s)

TestNetworkPlugins/group/bridge/HairPin (0.2s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-079523 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.20s)

Test skip (32/314)

TestDownloadOnly/v1.16.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:155: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

TestDownloadOnly/v1.28.4/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.4/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.4/cached-images (0.00s)

TestDownloadOnly/v1.28.4/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.4/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.4/binaries (0.00s)

TestDownloadOnly/v1.28.4/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.4/kubectl
aaa_download_only_test.go:155: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.4/kubectl (0.00s)

TestDownloadOnly/v1.29.0-rc.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.29.0-rc.0/cached-images (0.00s)

TestDownloadOnly/v1.29.0-rc.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.0/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.29.0-rc.0/binaries (0.00s)

TestDownloadOnly/v1.29.0-rc.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.0/kubectl
aaa_download_only_test.go:155: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.29.0-rc.0/kubectl (0.00s)

TestDownloadOnlyKic (0.82s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:225: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-108856 --alsologtostderr --driver=docker  --container-runtime=crio
aaa_download_only_test.go:237: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-108856" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-108856
--- SKIP: TestDownloadOnlyKic (0.82s)

TestOffline (0s)

=== RUN   TestOffline
=== PAUSE TestOffline
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

TestAddons/parallel/HelmTiller (0s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:443: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:497: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1783: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.17s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-221016" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-221016
--- SKIP: TestStartStop/group/disable-driver-mounts (0.17s)

TestNetworkPlugins/group/kubenet (4.08s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as the crio container runtime requires CNI
panic.go:523: 
----------------------- debugLogs start: kubenet-079523 [pass: true] --------------------------------
>>> netcat probes (nslookup kubernetes.default, nslookup debug kubernetes.default a-records, dig search kubernetes.default, dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53 and tcp/53, nc 10.96.0.10 udp/53 and tcp/53, /etc/nsswitch.conf, /etc/hosts, /etc/resolv.conf) all returned:
Error in configuration: context was not found for specified context: kubenet-079523
>>> k8s probes (nodes/services/endpoints/daemon sets/deployments/pods; cms; describe and logs for the netcat deployment, coredns, the api server, and the kube-proxy daemon set) all returned one of:
Error in configuration: context was not found for specified context: kubenet-079523
error: context "kubenet-079523" does not exist
>>> host probes (/etc/nsswitch.conf, /etc/hosts, /etc/resolv.conf, crictl pods/containers, /etc/cni, ip a s, ip r s, iptables-save, iptables table nat, kubelet daemon status/config, kubelet logs, /etc/kubernetes/kubelet.conf, /var/lib/kubelet/config.yaml, docker/cri-docker/containerd/crio daemon status, configs and unit files, cri-dockerd version, containerd config dump, crio config) all returned:
* Profile "kubenet-079523" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-079523"
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null
----------------------- debugLogs end: kubenet-079523 [took: 3.90860571s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-079523" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-079523
--- SKIP: TestNetworkPlugins/group/kubenet (4.08s)

TestNetworkPlugins/group/cilium (4.73s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:523: 
----------------------- debugLogs start: cilium-079523 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-079523

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-079523

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-079523

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-079523

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-079523

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-079523

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-079523

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-079523

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-079523

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-079523

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-079523" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-079523"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-079523" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-079523"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-079523" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-079523"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-079523

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-079523" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-079523"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-079523" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-079523"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-079523" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-079523" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-079523" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-079523" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-079523" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-079523" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-079523" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-079523" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-079523" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-079523"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-079523" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-079523"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-079523" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-079523"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-079523" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-079523"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-079523" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-079523"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-079523

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-079523

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-079523" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-079523" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-079523

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-079523

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-079523" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-079523" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-079523" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-079523" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-079523" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-079523" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-079523"

>>> host: kubelet daemon config:
* Profile "cilium-079523" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-079523"

>>> k8s: kubelet logs:
* Profile "cilium-079523" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-079523"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-079523" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-079523"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-079523" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-079523"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-079523

>>> host: docker daemon status:
* Profile "cilium-079523" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-079523"

>>> host: docker daemon config:
* Profile "cilium-079523" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-079523"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-079523" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-079523"

>>> host: docker system info:
* Profile "cilium-079523" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-079523"

>>> host: cri-docker daemon status:
* Profile "cilium-079523" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-079523"

>>> host: cri-docker daemon config:
* Profile "cilium-079523" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-079523"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-079523" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-079523"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-079523" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-079523"

>>> host: cri-dockerd version:
* Profile "cilium-079523" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-079523"

>>> host: containerd daemon status:
* Profile "cilium-079523" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-079523"

>>> host: containerd daemon config:
* Profile "cilium-079523" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-079523"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-079523" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-079523"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-079523" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-079523"

>>> host: containerd config dump:
* Profile "cilium-079523" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-079523"

>>> host: crio daemon status:
* Profile "cilium-079523" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-079523"

>>> host: crio daemon config:
* Profile "cilium-079523" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-079523"

>>> host: /etc/crio:
* Profile "cilium-079523" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-079523"

>>> host: crio config:
* Profile "cilium-079523" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-079523"

----------------------- debugLogs end: cilium-079523 [took: 4.553340773s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-079523" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-079523
--- SKIP: TestNetworkPlugins/group/cilium (4.73s)